00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2385
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3650
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.181 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.181 The recommended git tool is: git
00:00:00.181 using credential 00000000-0000-0000-0000-000000000002
00:00:00.185 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.212 Fetching changes from the remote Git repository
00:00:00.214 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.236 Using shallow fetch with depth 1
00:00:00.236 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.236 > git --version # timeout=10
00:00:00.257 > git --version # 'git version 2.39.2'
00:00:00.257 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.271 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.271 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.027 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.039 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.051 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:06.051 > git config core.sparsecheckout # timeout=10
00:00:06.062 > git read-tree -mu HEAD # timeout=10
00:00:06.077 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:06.100 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:06.100 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:06.208 [Pipeline] Start of Pipeline
00:00:06.222 [Pipeline] library
00:00:06.224 Loading library shm_lib@master
00:00:06.224 Library shm_lib@master is cached. Copying from home.
00:00:06.240 [Pipeline] node
00:00:06.271 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:06.272 [Pipeline] {
00:00:06.281 [Pipeline] catchError
00:00:06.282 [Pipeline] {
00:00:06.291 [Pipeline] wrap
00:00:06.298 [Pipeline] {
00:00:06.303 [Pipeline] stage
00:00:06.305 [Pipeline] { (Prologue)
00:00:06.319 [Pipeline] echo
00:00:06.320 Node: VM-host-SM38
00:00:06.325 [Pipeline] cleanWs
00:00:06.334 [WS-CLEANUP] Deleting project workspace...
00:00:06.334 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.340 [WS-CLEANUP] done
00:00:06.524 [Pipeline] setCustomBuildProperty
00:00:06.607 [Pipeline] httpRequest
00:00:07.164 [Pipeline] echo
00:00:07.166 Sorcerer 10.211.164.20 is alive
00:00:07.174 [Pipeline] retry
00:00:07.176 [Pipeline] {
00:00:07.187 [Pipeline] httpRequest
00:00:07.191 HttpMethod: GET
00:00:07.192 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.193 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.211 Response Code: HTTP/1.1 200 OK
00:00:07.212 Success: Status code 200 is in the accepted range: 200,404
00:00:07.212 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.221 [Pipeline] }
00:00:10.241 [Pipeline] // retry
00:00:10.248 [Pipeline] sh
00:00:10.538 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.555 [Pipeline] httpRequest
00:00:10.914 [Pipeline] echo
00:00:10.915 Sorcerer 10.211.164.20 is alive
00:00:10.923 [Pipeline] retry
00:00:10.925 [Pipeline] {
00:00:10.938 [Pipeline] httpRequest
00:00:10.943 HttpMethod: GET
00:00:10.943 URL: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:10.944 Sending request to url: http://10.211.164.20/packages/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:00:10.957 Response Code: HTTP/1.1 200 OK
00:00:10.958 Success: Status code 200 is in the accepted range: 200,404
00:00:10.959 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:01:53.654 [Pipeline] }
00:01:53.672 [Pipeline] // retry
00:01:53.680 [Pipeline] sh
00:01:53.966 + tar --no-same-owner -xf spdk_557f022f641abf567fb02704f67857eb8f6d9ff3.tar.gz
00:01:57.277 [Pipeline] sh
00:01:57.562 + git -C spdk log --oneline -n5
00:01:57.562 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:01:57.562 c0b2ac5c9 bdev: Change void to bdev_io pointer of parameter of _bdev_io_submit()
00:01:57.562 92fb22519 dif: dif_generate/verify_copy() supports NVMe PRACT = 1 and MD size > PI size
00:01:57.562 79daf868a dif: Add SPDK_DIF_FLAGS_NVME_PRACT for dif_generate/verify_copy()
00:01:57.562 431baf1b5 dif: Insert abstraction into dif_generate/verify_copy() for NVMe PRACT
00:01:57.587 [Pipeline] withCredentials
00:01:57.600 > git --version # timeout=10
00:01:57.613 > git --version # 'git version 2.39.2'
00:01:57.632 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:57.635 [Pipeline] {
00:01:57.646 [Pipeline] retry
00:01:57.649 [Pipeline] {
00:01:57.667 [Pipeline] sh
00:01:57.952 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:01:57.966 [Pipeline] }
00:01:57.988 [Pipeline] // retry
00:01:57.994 [Pipeline] }
00:01:58.013 [Pipeline] // withCredentials
00:01:58.024 [Pipeline] httpRequest
00:01:58.340 [Pipeline] echo
00:01:58.342 Sorcerer 10.211.164.20 is alive
00:01:58.352 [Pipeline] retry
00:01:58.354 [Pipeline] {
00:01:58.368 [Pipeline] httpRequest
00:01:58.373 HttpMethod: GET
00:01:58.374 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:58.375 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:58.376 Response Code: HTTP/1.1 200 OK
00:01:58.377 Success: Status code 200 is in the accepted range: 200,404
00:01:58.377 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:01.756 [Pipeline] }
00:02:01.773 [Pipeline] // retry
00:02:01.779 [Pipeline] sh
00:02:02.059 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:03.475 [Pipeline] sh
00:02:03.801 + git -C dpdk log --oneline -n5
00:02:03.801 caf0f5d395 version: 22.11.4
00:02:03.801 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:03.801 dc9c799c7d vhost: fix missing spinlock unlock
00:02:03.801 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:03.801 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:03.821 [Pipeline] writeFile
00:02:03.838 [Pipeline] sh
00:02:04.125 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:04.138 [Pipeline] sh
00:02:04.423 + cat autorun-spdk.conf
00:02:04.423 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:04.423 SPDK_TEST_NVME=1
00:02:04.423 SPDK_TEST_FTL=1
00:02:04.423 SPDK_TEST_ISAL=1
00:02:04.423 SPDK_RUN_ASAN=1
00:02:04.423 SPDK_RUN_UBSAN=1
00:02:04.423 SPDK_TEST_XNVME=1
00:02:04.423 SPDK_TEST_NVME_FDP=1
00:02:04.423 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:04.423 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:04.423 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:04.430 RUN_NIGHTLY=1
00:02:04.432 [Pipeline] }
00:02:04.446 [Pipeline] // stage
00:02:04.462 [Pipeline] stage
00:02:04.464 [Pipeline] { (Run VM)
00:02:04.477 [Pipeline] sh
00:02:04.765 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:04.765 + echo 'Start stage prepare_nvme.sh'
00:02:04.765 Start stage prepare_nvme.sh
00:02:04.765 + [[ -n 5 ]]
00:02:04.765 + disk_prefix=ex5
00:02:04.765 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:02:04.765 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:02:04.765 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:02:04.765 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:04.765 ++ SPDK_TEST_NVME=1
00:02:04.765 ++ SPDK_TEST_FTL=1
00:02:04.765 ++ SPDK_TEST_ISAL=1
00:02:04.765 ++ SPDK_RUN_ASAN=1
00:02:04.765 ++ SPDK_RUN_UBSAN=1
00:02:04.765 ++ SPDK_TEST_XNVME=1
00:02:04.765 ++ SPDK_TEST_NVME_FDP=1
00:02:04.765 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:04.765 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:04.765 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:04.765 ++ RUN_NIGHTLY=1
00:02:04.765 + cd /var/jenkins/workspace/nvme-vg-autotest
00:02:04.765 + nvme_files=()
00:02:04.765 + declare -A nvme_files
00:02:04.765 + backend_dir=/var/lib/libvirt/images/backends
00:02:04.765 + nvme_files['nvme.img']=5G
00:02:04.765 + nvme_files['nvme-cmb.img']=5G
00:02:04.765 + nvme_files['nvme-multi0.img']=4G
00:02:04.765 + nvme_files['nvme-multi1.img']=4G
00:02:04.765 + nvme_files['nvme-multi2.img']=4G
00:02:04.765 + nvme_files['nvme-openstack.img']=8G
00:02:04.765 + nvme_files['nvme-zns.img']=5G
00:02:04.765 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:04.765 + (( SPDK_TEST_FTL == 1 ))
00:02:04.765 + nvme_files["nvme-ftl.img"]=6G
00:02:04.765 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:04.765 + nvme_files["nvme-fdp.img"]=1G
00:02:04.765 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:04.765 + for nvme in "${!nvme_files[@]}"
00:02:04.765 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:02:05.026 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:05.026 + for nvme in "${!nvme_files[@]}"
00:02:05.026 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G
00:02:05.970 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:05.970 + for nvme in "${!nvme_files[@]}"
00:02:05.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:02:05.970 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:05.970 + for nvme in "${!nvme_files[@]}"
00:02:05.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:02:05.970 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:05.970 + for nvme in "${!nvme_files[@]}"
00:02:05.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:02:05.970 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:05.970 + for nvme in "${!nvme_files[@]}"
00:02:05.970 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:02:06.231 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:06.231 + for nvme in "${!nvme_files[@]}"
00:02:06.231 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:02:06.804 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:06.804 + for nvme in "${!nvme_files[@]}"
00:02:06.804 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G
00:02:07.065 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:07.065 + for nvme in "${!nvme_files[@]}"
00:02:07.065 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:02:08.007 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:08.007 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:02:08.007 + echo 'End stage prepare_nvme.sh'
00:02:08.007 End stage prepare_nvme.sh
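The Formatting lines above are emitted while create_nvme_img.sh preallocates each raw backing file. As a hedged illustration (the script body is not shown in this log; qemu-img is an assumption suggested by the fmt=raw/preallocation=falloc output), one such file could be produced by hand with:

    # Illustrative only: recreate the 4G multi2 backing file named in the log.
    # fmt=raw and preallocation=falloc match the logged Formatting output.
    qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex5-nvme-multi2.img 4G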
00:02:08.019 [Pipeline] sh
00:02:08.305 + DISTRO=fedora39
00:02:08.305 + CPUS=10
00:02:08.305 + RAM=12288
00:02:08.305 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:08.305 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:08.305
00:02:08.305 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:08.305 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:08.305 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:08.305 HELP=0
00:02:08.305 DRY_RUN=0
00:02:08.305 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,
00:02:08.305 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:08.305 NVME_AUTO_CREATE=0
00:02:08.305 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,,
00:02:08.305 NVME_CMB=,,,,
00:02:08.305 NVME_PMR=,,,,
00:02:08.305 NVME_ZNS=,,,,
00:02:08.305 NVME_MS=true,,,,
00:02:08.305 NVME_FDP=,,,on,
00:02:08.305 SPDK_VAGRANT_DISTRO=fedora39
00:02:08.305 SPDK_VAGRANT_VMCPU=10
00:02:08.305 SPDK_VAGRANT_VMRAM=12288
00:02:08.305 SPDK_VAGRANT_PROVIDER=libvirt
00:02:08.305 SPDK_VAGRANT_HTTP_PROXY=
00:02:08.305 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:08.305 SPDK_OPENSTACK_NETWORK=0
00:02:08.305 VAGRANT_PACKAGE_BOX=0
00:02:08.305 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:08.305 FORCE_DISTRO=true
00:02:08.305 VAGRANT_BOX_VERSION=
00:02:08.305 EXTRA_VAGRANTFILES=
00:02:08.305 NIC_MODEL=e1000
00:02:08.305
00:02:08.305 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:08.305 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:10.211 Bringing machine 'default' up with 'libvirt' provider...
00:02:10.782 ==> default: Creating image (snapshot of base box volume).
00:02:11.043 ==> default: Creating domain with the following settings...
00:02:11.043 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732135468_c3248fccd26491c89735
00:02:11.043 ==> default: -- Domain type: kvm
00:02:11.043 ==> default: -- Cpus: 10
00:02:11.043 ==> default: -- Feature: acpi
00:02:11.043 ==> default: -- Feature: apic
00:02:11.043 ==> default: -- Feature: pae
00:02:11.043 ==> default: -- Memory: 12288M
00:02:11.043 ==> default: -- Memory Backing: hugepages:
00:02:11.043 ==> default: -- Management MAC:
00:02:11.043 ==> default: -- Loader:
00:02:11.043 ==> default: -- Nvram:
00:02:11.043 ==> default: -- Base box: spdk/fedora39
00:02:11.043 ==> default: -- Storage pool: default
00:02:11.043 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732135468_c3248fccd26491c89735.img (20G)
00:02:11.043 ==> default: -- Volume Cache: default
00:02:11.043 ==> default: -- Kernel:
00:02:11.043 ==> default: -- Initrd:
00:02:11.043 ==> default: -- Graphics Type: vnc
00:02:11.043 ==> default: -- Graphics Port: -1
00:02:11.043 ==> default: -- Graphics IP: 127.0.0.1
00:02:11.043 ==> default: -- Graphics Password: Not defined
00:02:11.043 ==> default: -- Video Type: cirrus
00:02:11.043 ==> default: -- Video VRAM: 9216
00:02:11.043 ==> default: -- Sound Type:
00:02:11.043 ==> default: -- Keymap: en-us
00:02:11.043 ==> default: -- TPM Path:
00:02:11.043 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:11.043 ==> default: -- Command line args:
00:02:11.043 ==> default: -> value=-device,
00:02:11.043 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:11.043 ==> default: -> value=-drive,
00:02:11.043 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:11.043 ==> default: -> value=-device,
00:02:11.043 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:11.043 ==> default: -> value=-device,
00:02:11.043 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:11.043 ==> default: -> value=-drive,
00:02:11.043 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0,
00:02:11.043 ==> default: -> value=-device,
00:02:11.043 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:11.043 ==> default: -> value=-device,
00:02:11.043 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:11.043 ==> default: -> value=-drive,
00:02:11.043 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:11.043 ==> default: -> value=-device,
00:02:11.043 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:11.043 ==> default: -> value=-drive,
00:02:11.043 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:11.043 ==> default: -> value=-device,
00:02:11.043 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:11.043 ==> default: -> value=-drive,
00:02:11.043 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:11.043 ==> default: -> value=-device,
00:02:11.043 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:11.043 ==> default: -> value=-device,
00:02:11.043 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:11.043 ==> default: -> value=-device,
00:02:11.043 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:11.043 ==> default: -> value=-drive,
00:02:11.043 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:11.043 ==> default: -> value=-device,
00:02:11.043 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
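Read in pairs, the -device/-drive values above define four NVMe controllers; the last one (nvme-3) is attached to an FDP-enabled subsystem. For readability, here is that controller re-assembled as a single command-line fragment; every option string is copied verbatim from the settings above, and only the standalone invocation around them is illustrative:

    # Sketch: the nvme-3 FDP topology from the settings above, as one fragment.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096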
00:02:11.043 ==> default: Creating shared folders metadata...
00:02:11.043 ==> default: Starting domain.
00:02:12.951 ==> default: Waiting for domain to get an IP address...
00:02:35.019 ==> default: Waiting for SSH to become available...
00:02:35.019 ==> default: Configuring and enabling network interfaces...
00:02:35.961 default: SSH address: 192.168.121.49:22
00:02:35.961 default: SSH username: vagrant
00:02:35.961 default: SSH auth method: private key
00:02:37.876 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:43.161 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:48.427 ==> default: Mounting SSHFS shared folder...
00:02:48.996 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:48.996 ==> default: Checking Mount..
00:02:50.374 ==> default: Folder Successfully Mounted!
00:02:50.374
00:02:50.374 SUCCESS!
00:02:50.374
00:02:50.374 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:50.374 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:50.374 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:50.374
00:02:50.383 [Pipeline] }
00:02:50.398 [Pipeline] // stage
00:02:50.408 [Pipeline] dir
00:02:50.408 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:50.411 [Pipeline] {
00:02:50.433 [Pipeline] catchError
00:02:50.435 [Pipeline] {
00:02:50.453 [Pipeline] sh
00:02:50.733 + sed -ne '/^Host/,$p'
00:02:50.733 + vagrant ssh-config --host vagrant
00:02:50.733 + tee ssh_conf
00:02:53.276 Host vagrant
00:02:53.276 HostName 192.168.121.49
00:02:53.276 User vagrant
00:02:53.276 Port 22
00:02:53.276 UserKnownHostsFile /dev/null
00:02:53.276 StrictHostKeyChecking no
00:02:53.276 PasswordAuthentication no
00:02:53.276 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:53.276 IdentitiesOnly yes
00:02:53.276 LogLevel FATAL
00:02:53.276 ForwardAgent yes
00:02:53.276 ForwardX11 yes
00:02:53.276
00:02:53.292 [Pipeline] withEnv
00:02:53.295 [Pipeline] {
00:02:53.309 [Pipeline] sh
00:02:53.645 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:53.645 source /etc/os-release
00:02:53.645 [[ -e /image.version ]] && img=$(< /image.version)
00:02:53.645 # Minimal, systemd-like check.
00:02:53.645 if [[ -e /.dockerenv ]]; then
00:02:53.645 # Clear garbage from the node'\''s name:
00:02:53.645 # agt-er_autotest_547-896 -> autotest_547-896
00:02:53.645 # $HOSTNAME is the actual container id
00:02:53.645 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:53.645 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:53.645 # We can assume this is a mount from a host where container is running,
00:02:53.645 # so fetch its hostname to easily identify the target swarm worker.
00:02:53.645 container="$(< /etc/hostname) ($agent)"
00:02:53.645 else
00:02:53.645 # Fallback
00:02:53.645 container=$agent
00:02:53.645 fi
00:02:53.645 fi
00:02:53.645 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:53.645 '
00:02:53.656 [Pipeline] }
00:02:53.669 [Pipeline] // withEnv
00:02:53.677 [Pipeline] setCustomBuildProperty
00:02:53.689 [Pipeline] stage
00:02:53.691 [Pipeline] { (Tests)
00:02:53.707 [Pipeline] sh
00:02:53.996 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:54.273 [Pipeline] sh
00:02:54.553 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:54.569 [Pipeline] timeout
00:02:54.569 Timeout set to expire in 50 min
00:02:54.571 [Pipeline] {
00:02:54.585 [Pipeline] sh
00:02:54.870 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:55.131 HEAD is now at 557f022f6 bdev: Change 1st parameter of bdev_bytes_to_blocks from bdev to desc
00:02:55.143 [Pipeline] sh
00:02:55.426 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:55.703 [Pipeline] sh
00:02:55.986 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:56.264 [Pipeline] sh
00:02:56.548 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:56.809 ++ readlink -f spdk_repo
00:02:56.809 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:56.809 + [[ -n /home/vagrant/spdk_repo ]]
00:02:56.809 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:56.809 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:56.809 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:56.809 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:56.809 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:56.809 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:56.809 + cd /home/vagrant/spdk_repo
00:02:56.809 + source /etc/os-release
00:02:56.809 ++ NAME='Fedora Linux'
00:02:56.809 ++ VERSION='39 (Cloud Edition)'
00:02:56.809 ++ ID=fedora
00:02:56.809 ++ VERSION_ID=39
00:02:56.809 ++ VERSION_CODENAME=
00:02:56.809 ++ PLATFORM_ID=platform:f39
00:02:56.809 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:56.809 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:56.809 ++ LOGO=fedora-logo-icon
00:02:56.809 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:56.809 ++ HOME_URL=https://fedoraproject.org/
00:02:56.809 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:56.809 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:56.809 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:56.809 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:56.809 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:56.809 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:56.809 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:56.809 ++ SUPPORT_END=2024-11-12
00:02:56.809 ++ VARIANT='Cloud Edition'
00:02:56.809 ++ VARIANT_ID=cloud
00:02:56.809 + uname -a
00:02:56.809 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:56.809 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:57.070 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:57.332 Hugepages
00:02:57.332 node hugesize free / total
00:02:57.332 node0 1048576kB 0 / 0
00:02:57.332 node0 2048kB 0 / 0
00:02:57.332
00:02:57.332 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:57.332 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:57.332 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:57.332 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:57.332 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:57.332 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
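The Hugepages table above reports 0 free / 0 total for both the 1048576kB and 2048kB page sizes on node0. As a hedged sketch (standard kernel sysfs paths, not part of setup.sh), the same per-node view can be reproduced with:

    # Sketch: print "node size free / total" per NUMA node, as in the table above.
    for d in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        node=${d#/sys/devices/system/node/}; node=${node%%/*}
        size=${d##*hugepages-}
        echo "$node $size $(< "$d/free_hugepages") / $(< "$d/nr_hugepages")"
    done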
00:02:57.332 + rm -f /tmp/spdk-ld-path
00:02:57.332 + source autorun-spdk.conf
00:02:57.332 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:57.332 ++ SPDK_TEST_NVME=1
00:02:57.332 ++ SPDK_TEST_FTL=1
00:02:57.332 ++ SPDK_TEST_ISAL=1
00:02:57.332 ++ SPDK_RUN_ASAN=1
00:02:57.332 ++ SPDK_RUN_UBSAN=1
00:02:57.332 ++ SPDK_TEST_XNVME=1
00:02:57.332 ++ SPDK_TEST_NVME_FDP=1
00:02:57.332 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:57.332 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:57.332 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:57.332 ++ RUN_NIGHTLY=1
00:02:57.332 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:57.332 + [[ -n '' ]]
00:02:57.332 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:57.332 + for M in /var/spdk/build-*-manifest.txt
00:02:57.332 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:57.332 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:57.332 + for M in /var/spdk/build-*-manifest.txt
00:02:57.332 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:57.332 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:57.332 + for M in /var/spdk/build-*-manifest.txt
00:02:57.332 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:57.332 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:57.332 ++ uname
00:02:57.332 + [[ Linux == \L\i\n\u\x ]]
00:02:57.332 + sudo dmesg -T
00:02:57.332 + sudo dmesg --clear
00:02:57.332 + dmesg_pid=5762
00:02:57.332 + [[ Fedora Linux == FreeBSD ]]
00:02:57.332 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:57.332 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:57.332 + sudo dmesg -Tw
00:02:57.332 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:57.332 + [[ -x /usr/src/fio-static/fio ]]
00:02:57.332 + export FIO_BIN=/usr/src/fio-static/fio
00:02:57.332 + FIO_BIN=/usr/src/fio-static/fio
00:02:57.332 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:57.332 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:57.332 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:57.332 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:57.332 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:57.332 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:57.332 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:57.332 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:57.332 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:57.593 20:45:15 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:57.593 20:45:15 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:57.593 20:45:15 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:57.593 20:45:15 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:57.593 20:45:15 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:57.593 20:45:15 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:57.593 20:45:15 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:57.593 20:45:15 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:57.593 20:45:15 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:57.593 20:45:15 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:57.593 20:45:15 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:57.593 20:45:15 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:57.593 20:45:15 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:57.593 20:45:15 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:02:57.593 20:45:15 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:57.593 20:45:15 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:57.594 20:45:15 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:57.594 20:45:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:57.594 20:45:15 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:57.594 20:45:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:57.594 20:45:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:57.594 20:45:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:57.594 20:45:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.594 20:45:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.594 20:45:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.594 20:45:15 -- paths/export.sh@5 -- $ export PATH
00:02:57.594 20:45:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:57.594 20:45:15 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:57.594 20:45:15 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:57.594 20:45:15 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732135515.XXXXXX
00:02:57.594 20:45:15 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732135515.23fSis
00:02:57.594 20:45:15 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:57.594 20:45:15 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']'
00:02:57.594 20:45:15 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:57.594 20:45:15 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:57.594 20:45:15 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:57.594 20:45:15 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:57.594 20:45:15 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:57.594 20:45:15 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:57.594 20:45:15 -- common/autotest_common.sh@10 -- $ set +x
00:02:57.594 20:45:15 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:02:57.594 20:45:15 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:57.594 20:45:15 -- pm/common@17 -- $ local monitor
00:02:57.594 20:45:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:57.594 20:45:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:57.594 20:45:15 -- pm/common@25 -- $ sleep 1
00:02:57.594 20:45:15 -- pm/common@21 -- $ date +%s
00:02:57.594 20:45:15 -- pm/common@21 -- $ date +%s
00:02:57.594 20:45:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732135515
00:02:57.594 20:45:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732135515
00:02:57.594 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732135515_collect-cpu-load.pm.log
00:02:57.594 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732135515_collect-vmstat.pm.log
00:02:58.538 20:45:16 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:58.538 20:45:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:58.538 20:45:16 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:58.538 20:45:16 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:58.538 20:45:16 -- spdk/autobuild.sh@16 -- $ date -u
00:02:58.538 Wed Nov 20 08:45:16 PM UTC 2024
00:02:58.538 20:45:16 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:58.538 v25.01-pre-219-g557f022f6
00:02:58.538 20:45:16 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:58.538 20:45:16 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:58.538 20:45:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:58.538 20:45:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:58.538 20:45:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:58.538 ************************************
00:02:58.538 START TEST asan
00:02:58.538 ************************************
00:02:58.538 using asan
00:02:58.538 20:45:16 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:58.538
00:02:58.538 real 0m0.000s
00:02:58.538 user 0m0.000s
00:02:58.538 sys 0m0.000s
00:02:58.538 20:45:16 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:58.538 20:45:16 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:58.538 ************************************
00:02:58.538 END TEST asan
00:02:58.538 ************************************
00:02:58.538 20:45:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:58.538 20:45:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:58.538 20:45:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:58.538 20:45:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:58.538 20:45:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:58.538 ************************************
00:02:58.538 START TEST ubsan
00:02:58.538 ************************************
00:02:58.538 using ubsan
00:02:58.538 20:45:16 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:58.538
00:02:58.538 real 0m0.000s
00:02:58.538 user 0m0.000s
00:02:58.538 sys 0m0.000s
00:02:58.538 20:45:16 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:58.538 20:45:16 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:58.538 ************************************
00:02:58.538 END TEST ubsan
00:02:58.538 ************************************
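The START/END banners and the real/user/sys triplets above are produced by SPDK's run_test helper in common/autotest_common.sh. Its real implementation is not shown in this log; a minimal illustrative wrapper with the same observable shape would be:

    # Sketch only: mimics the banner and timing output visible above,
    # not the actual common/autotest_common.sh implementation.
    run_test() {
        local suite=$1; shift
        echo '************************************'
        echo "START TEST $suite"
        echo '************************************'
        time "$@"          # e.g. run_test asan echo 'using asan'
        echo '************************************'
        echo "END TEST $suite"
        echo '************************************'
    }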
00:02:58.800 20:45:16 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:02:58.800 20:45:16 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:58.800 20:45:16 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:58.800 20:45:16 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:58.800 20:45:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:58.800 20:45:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:58.800 ************************************
00:02:58.800 START TEST build_native_dpdk
00:02:58.800 ************************************
00:02:58.800 20:45:16 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:58.800 caf0f5d395 version: 22.11.4
00:02:58.800 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:58.800 dc9c799c7d vhost: fix missing spinlock unlock
00:02:58.800 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:58.800 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:02:58.800 patching file config/rte_config.h
00:02:58.800 Hunk #1 succeeded at 60 (offset 1 line).
00:02:58.800 20:45:16 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:58.800 20:45:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:02:58.801 20:45:16 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:02:58.801 patching file lib/pcapng/rte_pcapng.c
00:02:58.801 Hunk #1 succeeded at 110 (offset -18 lines).
00:02:58.801 20:45:16 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:58.801 20:45:16 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
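The three traces above all pass through the same cmp_versions helper in scripts/common.sh: lt 22.11.4 21.11.0 returns 1, lt 22.11.4 24.07.0 returns 0 (so both source patches apply), and ge 22.11.4 24.07.0 returns 1 (the version is treated as pre-24.07). A condensed, illustrative rendering of the field-by-field comparison the trace walks through (the real helper also handles the other comparison operators and mixed-length versions):

    # Sketch: less-than comparison in the style of the cmp_versions trace.
    # Splits both versions on the same IFS characters (.-:) and compares
    # the fields numerically, left to right.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal versions are not less-than
    }
    version_lt 22.11.4 21.11.0; echo $?    # 1, as in the first trace
    version_lt 22.11.4 24.07.0; echo $?    # 0, so the rte_pcapng patch applies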
00:02:58.801 20:45:16 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:02:58.801 20:45:16 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:02:58.801 20:45:16 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:02:58.801 20:45:16 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:02:58.801 20:45:16 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:03:03.009 The Meson build system
00:03:03.009 Version: 1.5.0
00:03:03.009 Source dir: /home/vagrant/spdk_repo/dpdk
00:03:03.009 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:03:03.009 Build type: native build
00:03:03.009 Program cat found: YES (/usr/bin/cat)
00:03:03.009 Project name: DPDK
00:03:03.009 Project version: 22.11.4
00:03:03.009 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:03.009 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:03.009 Host machine cpu family: x86_64
00:03:03.009 Host machine cpu: x86_64
00:03:03.009 Message: ## Building in Developer Mode ##
00:03:03.009 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:03.009 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:03:03.009 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:03:03.009 Program objdump found: YES (/usr/bin/objdump)
00:03:03.009 Program python3 found: YES (/usr/bin/python3)
00:03:03.009 Program cat found: YES (/usr/bin/cat)
00:03:03.009 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:03:03.009 Checking for size of "void *" : 8
00:03:03.009 Checking for size of "void *" : 8 (cached)
00:03:03.009 Library m found: YES
00:03:03.009 Library numa found: YES
00:03:03.009 Has header "numaif.h" : YES
00:03:03.009 Library fdt found: NO
00:03:03.009 Library execinfo found: NO
00:03:03.009 Has header "execinfo.h" : YES
00:03:03.009 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:03.009 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:03.009 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:03.009 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:03.009 Run-time dependency openssl found: YES 3.1.1
00:03:03.009 Run-time dependency libpcap found: YES 1.10.4
00:03:03.009 Has header "pcap.h" with dependency libpcap: YES
00:03:03.009 Compiler for C supports arguments -Wcast-qual: YES
00:03:03.009 Compiler for C supports arguments -Wdeprecated: YES
00:03:03.009 Compiler for C supports arguments -Wformat: YES
00:03:03.009 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:03.009 Compiler for C supports arguments -Wformat-security: NO
00:03:03.009 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:03.009 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:03.009 Compiler for C supports arguments -Wnested-externs: YES
00:03:03.009 Compiler for C supports arguments -Wold-style-definition: YES
00:03:03.009 Compiler for C supports arguments -Wpointer-arith: YES
00:03:03.009 Compiler for C supports arguments -Wsign-compare: YES
00:03:03.009 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:03.009 Compiler for C supports arguments -Wundef: YES
00:03:03.009 Compiler for C supports arguments -Wwrite-strings: YES
00:03:03.009 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:03.009 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:03.009 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:03.009 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:03.009 Compiler for C supports arguments -mavx512f: YES
00:03:03.009 Checking if "AVX512 checking" compiles: YES
00:03:03.009 Fetching value of define "__SSE4_2__" : 1
00:03:03.009 Fetching value of define "__AES__" : 1
00:03:03.009 Fetching value of define "__AVX__" : 1
00:03:03.009 Fetching value of define "__AVX2__" : 1
00:03:03.009 Fetching value of define "__AVX512BW__" : 1
00:03:03.009 Fetching value of define "__AVX512CD__" : 1
00:03:03.009 Fetching value of define "__AVX512DQ__" : 1
00:03:03.009 Fetching value of define "__AVX512F__" : 1
00:03:03.009 Fetching value of define "__AVX512VL__" : 1
00:03:03.009 Fetching value of define "__PCLMUL__" : 1
00:03:03.009 Fetching value of define "__RDRND__" : 1
00:03:03.009 Fetching value of define "__RDSEED__" : 1
00:03:03.009 Fetching value of define "__VPCLMULQDQ__" : 1
00:03:03.009 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:03.009 Message: lib/kvargs: Defining dependency "kvargs"
00:03:03.009 Message: lib/telemetry: Defining dependency "telemetry"
00:03:03.009 Checking for function "getentropy" : YES
00:03:03.009 Message: lib/eal: Defining dependency "eal"
00:03:03.009 Message: lib/ring: Defining dependency "ring"
00:03:03.009 Message: lib/rcu: Defining dependency "rcu"
00:03:03.009 Message: lib/mempool: Defining dependency "mempool"
00:03:03.009 Message: lib/mbuf: Defining dependency "mbuf"
00:03:03.009 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:03.010 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:03.010 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:03.010 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:03.010 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:03.010 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:03:03.010 Compiler for C supports arguments -mpclmul: YES
00:03:03.010 Compiler for C supports arguments -maes: YES
00:03:03.010 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:03.010 Compiler for C supports arguments -mavx512bw: YES
00:03:03.010 Compiler for C supports arguments -mavx512dq: YES
00:03:03.010 Compiler for C supports arguments -mavx512vl: YES
00:03:03.010 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:03.010 Compiler for C supports arguments -mavx2: YES
00:03:03.010 Compiler for C supports arguments -mavx: YES
00:03:03.010 Message: lib/net: Defining dependency "net"
00:03:03.010 Message: lib/meter: Defining dependency "meter"
00:03:03.010 Message: lib/ethdev: Defining dependency "ethdev"
00:03:03.010 Message: lib/pci: Defining dependency "pci"
00:03:03.010 Message: lib/cmdline: Defining dependency "cmdline"
00:03:03.010 Message: lib/metrics: Defining dependency "metrics"
00:03:03.010 Message: lib/hash: Defining dependency "hash"
00:03:03.010 Message: lib/timer: Defining dependency "timer"
00:03:03.010 Fetching value of define "__AVX2__" : 1 (cached)
00:03:03.010 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:03.010 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:03.010 Fetching value of define "__AVX512CD__" : 1 (cached)
00:03:03.010 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:03.010 Message: lib/acl: Defining dependency "acl"
00:03:03.010 Message: lib/bbdev: Defining dependency "bbdev"
00:03:03.010 Message: lib/bitratestats: Defining dependency "bitratestats"
00:03:03.010 Run-time dependency libelf found: YES 0.191
00:03:03.010 Message: lib/bpf: Defining dependency "bpf"
00:03:03.010 Message: lib/cfgfile: Defining dependency "cfgfile"
00:03:03.010 Message: lib/compressdev: Defining dependency "compressdev"
00:03:03.010 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:03.010 Message: lib/distributor: Defining dependency "distributor"
00:03:03.010 Message: lib/efd: Defining dependency "efd"
00:03:03.010 Message: lib/eventdev: Defining dependency "eventdev"
00:03:03.010 Message: lib/gpudev: Defining dependency "gpudev"
00:03:03.010 Message: lib/gro: Defining dependency "gro"
00:03:03.010 Message: lib/gso: Defining dependency "gso"
00:03:03.010 Message: lib/ip_frag: Defining dependency "ip_frag"
00:03:03.010 Message: lib/jobstats: Defining dependency "jobstats"
00:03:03.010 Message: lib/latencystats: Defining dependency "latencystats"
00:03:03.010 Message: lib/lpm: Defining dependency "lpm"
00:03:03.010 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:03.010 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:03.010 Fetching value of define "__AVX512IFMA__" : 1
00:03:03.010 Message: lib/member: Defining dependency "member"
00:03:03.010 Message: lib/pcapng: Defining dependency "pcapng"
00:03:03.010 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:03.010 Message: lib/power: Defining dependency "power"
00:03:03.010 Message: lib/rawdev: Defining dependency "rawdev"
00:03:03.010 Message: lib/regexdev: Defining dependency "regexdev"
00:03:03.010 Message: lib/dmadev: Defining dependency "dmadev"
00:03:03.010 Message: lib/rib: Defining dependency "rib"
00:03:03.010 Message: lib/reorder: Defining dependency "reorder"
00:03:03.010 Message: lib/sched: Defining dependency "sched"
00:03:03.010 Message: lib/security: Defining dependency "security"
00:03:03.010 Message: lib/stack: Defining dependency "stack"
00:03:03.010 Has header "linux/userfaultfd.h" : YES
00:03:03.010 Message: lib/vhost: Defining dependency "vhost"
00:03:03.010 Message: lib/ipsec: Defining dependency "ipsec"
00:03:03.010 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:03.010 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:03.010 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:03.010 Message: lib/fib: Defining dependency "fib"
00:03:03.010 Message: lib/port: Defining dependency "port"
00:03:03.010 Message: lib/pdump: Defining dependency "pdump"
00:03:03.010 Message: lib/table: Defining dependency "table"
00:03:03.010 Message: lib/pipeline: Defining dependency "pipeline"
00:03:03.010 Message: lib/graph: Defining dependency "graph"
00:03:03.010 Message: lib/node: Defining dependency "node"
00:03:03.010 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:03.010 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:03.010 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:03.010 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:03.010 Compiler for C supports arguments -Wno-sign-compare: YES
00:03:03.010 Compiler for C supports arguments -Wno-unused-value: YES
00:03:03.010 Compiler for C supports arguments -Wno-format: YES
00:03:03.010 Compiler for C supports arguments -Wno-format-security: YES
00:03:03.010 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:03:03.010 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:03.010 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:03:03.010 Compiler for C supports arguments -Wno-unused-parameter: YES
00:03:04.397 Fetching value of define "__AVX2__" : 1 (cached)
00:03:04.397 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:04.397 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:04.397 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:04.397 Compiler for C supports arguments -mavx512bw: YES (cached)
00:03:04.397 Compiler for C supports arguments -march=skylake-avx512: YES
00:03:04.397 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:03:04.397 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:04.397 Configuring doxy-api.conf using configuration
00:03:04.397 Program sphinx-build found: NO
00:03:04.397 Configuring rte_build_config.h using configuration
00:03:04.397 Message:
00:03:04.397 =================
00:03:04.397 Applications Enabled
00:03:04.397 =================
00:03:04.397
00:03:04.397 apps:
00:03:04.397 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:03:04.397 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:03:04.397 test-security-perf,
00:03:04.397
00:03:04.397 Message:
00:03:04.397 =================
00:03:04.397 Libraries Enabled
00:03:04.397 =================
00:03:04.397
00:03:04.397 libs:
00:03:04.397 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:03:04.397 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:03:04.397 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:03:04.397 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:03:04.397 member, pcapng, power, rawdev, regexdev, dmadev, rib,
reorder, 00:03:04.397 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:03:04.397 table, pipeline, graph, node, 00:03:04.397 00:03:04.397 Message: 00:03:04.397 =============== 00:03:04.397 Drivers Enabled 00:03:04.397 =============== 00:03:04.397 00:03:04.397 common: 00:03:04.397 00:03:04.397 bus: 00:03:04.397 pci, vdev, 00:03:04.397 mempool: 00:03:04.397 ring, 00:03:04.397 dma: 00:03:04.397 00:03:04.397 net: 00:03:04.397 i40e, 00:03:04.397 raw: 00:03:04.397 00:03:04.397 crypto: 00:03:04.397 00:03:04.397 compress: 00:03:04.397 00:03:04.397 regex: 00:03:04.397 00:03:04.397 vdpa: 00:03:04.397 00:03:04.397 event: 00:03:04.397 00:03:04.397 baseband: 00:03:04.397 00:03:04.397 gpu: 00:03:04.397 00:03:04.397 00:03:04.397 Message: 00:03:04.397 ================= 00:03:04.397 Content Skipped 00:03:04.397 ================= 00:03:04.397 00:03:04.397 apps: 00:03:04.397 00:03:04.397 libs: 00:03:04.397 kni: explicitly disabled via build config (deprecated lib) 00:03:04.397 flow_classify: explicitly disabled via build config (deprecated lib) 00:03:04.397 00:03:04.397 drivers: 00:03:04.397 common/cpt: not in enabled drivers build config 00:03:04.397 common/dpaax: not in enabled drivers build config 00:03:04.397 common/iavf: not in enabled drivers build config 00:03:04.397 common/idpf: not in enabled drivers build config 00:03:04.397 common/mvep: not in enabled drivers build config 00:03:04.397 common/octeontx: not in enabled drivers build config 00:03:04.397 bus/auxiliary: not in enabled drivers build config 00:03:04.397 bus/dpaa: not in enabled drivers build config 00:03:04.397 bus/fslmc: not in enabled drivers build config 00:03:04.397 bus/ifpga: not in enabled drivers build config 00:03:04.398 bus/vmbus: not in enabled drivers build config 00:03:04.398 common/cnxk: not in enabled drivers build config 00:03:04.398 common/mlx5: not in enabled drivers build config 00:03:04.398 common/qat: not in enabled drivers build config 00:03:04.398 common/sfc_efx: not in enabled drivers build config 00:03:04.398 mempool/bucket: not in enabled drivers build config 00:03:04.398 mempool/cnxk: not in enabled drivers build config 00:03:04.398 mempool/dpaa: not in enabled drivers build config 00:03:04.398 mempool/dpaa2: not in enabled drivers build config 00:03:04.398 mempool/octeontx: not in enabled drivers build config 00:03:04.398 mempool/stack: not in enabled drivers build config 00:03:04.398 dma/cnxk: not in enabled drivers build config 00:03:04.398 dma/dpaa: not in enabled drivers build config 00:03:04.398 dma/dpaa2: not in enabled drivers build config 00:03:04.398 dma/hisilicon: not in enabled drivers build config 00:03:04.398 dma/idxd: not in enabled drivers build config 00:03:04.398 dma/ioat: not in enabled drivers build config 00:03:04.398 dma/skeleton: not in enabled drivers build config 00:03:04.398 net/af_packet: not in enabled drivers build config 00:03:04.398 net/af_xdp: not in enabled drivers build config 00:03:04.398 net/ark: not in enabled drivers build config 00:03:04.398 net/atlantic: not in enabled drivers build config 00:03:04.398 net/avp: not in enabled drivers build config 00:03:04.398 net/axgbe: not in enabled drivers build config 00:03:04.398 net/bnx2x: not in enabled drivers build config 00:03:04.398 net/bnxt: not in enabled drivers build config 00:03:04.398 net/bonding: not in enabled drivers build config 00:03:04.398 net/cnxk: not in enabled drivers build config 00:03:04.398 net/cxgbe: not in enabled drivers build config 00:03:04.398 net/dpaa: not in enabled drivers build config 
00:03:04.398 net/dpaa2: not in enabled drivers build config 00:03:04.398 net/e1000: not in enabled drivers build config 00:03:04.398 net/ena: not in enabled drivers build config 00:03:04.398 net/enetc: not in enabled drivers build config 00:03:04.398 net/enetfec: not in enabled drivers build config 00:03:04.398 net/enic: not in enabled drivers build config 00:03:04.398 net/failsafe: not in enabled drivers build config 00:03:04.398 net/fm10k: not in enabled drivers build config 00:03:04.398 net/gve: not in enabled drivers build config 00:03:04.398 net/hinic: not in enabled drivers build config 00:03:04.398 net/hns3: not in enabled drivers build config 00:03:04.398 net/iavf: not in enabled drivers build config 00:03:04.398 net/ice: not in enabled drivers build config 00:03:04.398 net/idpf: not in enabled drivers build config 00:03:04.398 net/igc: not in enabled drivers build config 00:03:04.398 net/ionic: not in enabled drivers build config 00:03:04.398 net/ipn3ke: not in enabled drivers build config 00:03:04.398 net/ixgbe: not in enabled drivers build config 00:03:04.398 net/kni: not in enabled drivers build config 00:03:04.398 net/liquidio: not in enabled drivers build config 00:03:04.398 net/mana: not in enabled drivers build config 00:03:04.398 net/memif: not in enabled drivers build config 00:03:04.398 net/mlx4: not in enabled drivers build config 00:03:04.398 net/mlx5: not in enabled drivers build config 00:03:04.398 net/mvneta: not in enabled drivers build config 00:03:04.398 net/mvpp2: not in enabled drivers build config 00:03:04.398 net/netvsc: not in enabled drivers build config 00:03:04.398 net/nfb: not in enabled drivers build config 00:03:04.398 net/nfp: not in enabled drivers build config 00:03:04.398 net/ngbe: not in enabled drivers build config 00:03:04.398 net/null: not in enabled drivers build config 00:03:04.398 net/octeontx: not in enabled drivers build config 00:03:04.398 net/octeon_ep: not in enabled drivers build config 00:03:04.398 net/pcap: not in enabled drivers build config 00:03:04.398 net/pfe: not in enabled drivers build config 00:03:04.398 net/qede: not in enabled drivers build config 00:03:04.398 net/ring: not in enabled drivers build config 00:03:04.398 net/sfc: not in enabled drivers build config 00:03:04.398 net/softnic: not in enabled drivers build config 00:03:04.398 net/tap: not in enabled drivers build config 00:03:04.398 net/thunderx: not in enabled drivers build config 00:03:04.398 net/txgbe: not in enabled drivers build config 00:03:04.398 net/vdev_netvsc: not in enabled drivers build config 00:03:04.398 net/vhost: not in enabled drivers build config 00:03:04.398 net/virtio: not in enabled drivers build config 00:03:04.398 net/vmxnet3: not in enabled drivers build config 00:03:04.398 raw/cnxk_bphy: not in enabled drivers build config 00:03:04.398 raw/cnxk_gpio: not in enabled drivers build config 00:03:04.398 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:04.398 raw/ifpga: not in enabled drivers build config 00:03:04.398 raw/ntb: not in enabled drivers build config 00:03:04.398 raw/skeleton: not in enabled drivers build config 00:03:04.398 crypto/armv8: not in enabled drivers build config 00:03:04.398 crypto/bcmfs: not in enabled drivers build config 00:03:04.398 crypto/caam_jr: not in enabled drivers build config 00:03:04.398 crypto/ccp: not in enabled drivers build config 00:03:04.398 crypto/cnxk: not in enabled drivers build config 00:03:04.398 crypto/dpaa_sec: not in enabled drivers build config 00:03:04.398 crypto/dpaa2_sec: not in 
enabled drivers build config 00:03:04.398 crypto/ipsec_mb: not in enabled drivers build config 00:03:04.398 crypto/mlx5: not in enabled drivers build config 00:03:04.398 crypto/mvsam: not in enabled drivers build config 00:03:04.398 crypto/nitrox: not in enabled drivers build config 00:03:04.398 crypto/null: not in enabled drivers build config 00:03:04.398 crypto/octeontx: not in enabled drivers build config 00:03:04.398 crypto/openssl: not in enabled drivers build config 00:03:04.398 crypto/scheduler: not in enabled drivers build config 00:03:04.398 crypto/uadk: not in enabled drivers build config 00:03:04.398 crypto/virtio: not in enabled drivers build config 00:03:04.398 compress/isal: not in enabled drivers build config 00:03:04.398 compress/mlx5: not in enabled drivers build config 00:03:04.398 compress/octeontx: not in enabled drivers build config 00:03:04.398 compress/zlib: not in enabled drivers build config 00:03:04.398 regex/mlx5: not in enabled drivers build config 00:03:04.398 regex/cn9k: not in enabled drivers build config 00:03:04.398 vdpa/ifc: not in enabled drivers build config 00:03:04.398 vdpa/mlx5: not in enabled drivers build config 00:03:04.398 vdpa/sfc: not in enabled drivers build config 00:03:04.398 event/cnxk: not in enabled drivers build config 00:03:04.398 event/dlb2: not in enabled drivers build config 00:03:04.398 event/dpaa: not in enabled drivers build config 00:03:04.398 event/dpaa2: not in enabled drivers build config 00:03:04.398 event/dsw: not in enabled drivers build config 00:03:04.398 event/opdl: not in enabled drivers build config 00:03:04.398 event/skeleton: not in enabled drivers build config 00:03:04.398 event/sw: not in enabled drivers build config 00:03:04.398 event/octeontx: not in enabled drivers build config 00:03:04.398 baseband/acc: not in enabled drivers build config 00:03:04.398 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:04.398 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:04.398 baseband/la12xx: not in enabled drivers build config 00:03:04.398 baseband/null: not in enabled drivers build config 00:03:04.398 baseband/turbo_sw: not in enabled drivers build config 00:03:04.398 gpu/cuda: not in enabled drivers build config 00:03:04.398 00:03:04.398 00:03:04.398 Build targets in project: 309 00:03:04.398 00:03:04.398 DPDK 22.11.4 00:03:04.398 00:03:04.398 User defined options 00:03:04.398 libdir : lib 00:03:04.398 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:04.398 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:04.398 c_link_args : 00:03:04.398 enable_docs : false 00:03:04.398 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:03:04.398 enable_kmods : false 00:03:04.398 machine : native 00:03:04.398 tests : false 00:03:04.398 00:03:04.398 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:04.398 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
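The "Compiler for C supports arguments" lines above are meson test-compiles. A minimal sketch of reproducing one such probe by hand, assuming gcc or clang is installed as cc on the build host (meson's actual check differs in detail):

  # Probe whether the C compiler accepts -mavx512f, as meson does above.
  printf 'int main(void){return 0;}\n' | cc -Werror -mavx512f -x c -c - -o /dev/null \
    && echo "Compiler for C supports arguments -mavx512f: YES"

Likewise, a hedged reconstruction of the configure invocation implied by the "User defined options" summary above, assuming it is run from /home/vagrant/spdk_repo/dpdk with the standard DPDK 22.11 meson options; the CI wrapper may pass additional flags, and the deprecation warning above indicates it actually omitted the explicit setup subcommand:

  # Reconstructed (assumed) meson setup call matching the options summary.
  meson setup build-tmp \
    --libdir lib \
    --prefix /home/vagrant/spdk_repo/dpdk/build \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false

The resulting build directory, /home/vagrant/spdk_repo/dpdk/build-tmp, is the one the ninja invocation below compiles with -j10.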
00:03:04.398 20:45:22 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:04.398 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:04.398 [1/738] Generating lib/rte_kvargs_mingw with a custom command 00:03:04.398 [2/738] Generating lib/rte_telemetry_def with a custom command 00:03:04.398 [3/738] Generating lib/rte_kvargs_def with a custom command 00:03:04.398 [4/738] Generating lib/rte_telemetry_mingw with a custom command 00:03:04.398 [5/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:04.657 [6/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:04.657 [7/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:04.657 [8/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:04.657 [9/738] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:04.657 [10/738] Linking static target lib/librte_kvargs.a 00:03:04.657 [11/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:04.657 [12/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:04.657 [13/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:04.657 [14/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:04.657 [15/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:04.657 [16/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:04.657 [17/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:04.657 [18/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:04.657 [19/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:04.657 [20/738] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.917 [21/738] Linking target lib/librte_kvargs.so.23.0 00:03:04.917 [22/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:04.917 [23/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:03:04.917 [24/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:04.917 [25/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:04.917 [26/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:04.917 [27/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:04.917 [28/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:04.917 [29/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:04.917 [30/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:04.917 [31/738] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:04.917 [32/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:04.917 [33/738] Linking static target lib/librte_telemetry.a 00:03:04.917 [34/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:04.917 [35/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:04.917 [36/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:04.917 [37/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:05.178 [38/738] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:05.178 [39/738] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:05.178 [40/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:05.178 [41/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:05.178 [42/738] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.178 [43/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:05.178 [44/738] Linking target lib/librte_telemetry.so.23.0 00:03:05.178 [45/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:05.439 [46/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:05.439 [47/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:05.439 [48/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:05.439 [49/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:05.439 [50/738] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:05.439 [51/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:05.439 [52/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:05.439 [53/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:05.439 [54/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:05.439 [55/738] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:05.439 [56/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:05.439 [57/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:05.439 [58/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:05.440 [59/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:05.440 [60/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:05.440 [61/738] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:05.440 [62/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:05.440 [63/738] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:05.440 [64/738] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:05.440 [65/738] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:05.700 [66/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:03:05.700 [67/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:05.700 [68/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:05.700 [69/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:05.700 [70/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:05.700 [71/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:05.700 [72/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:05.700 [73/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:05.700 [74/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:05.700 [75/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:05.700 [76/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:05.700 [77/738] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:05.700 [78/738] Generating 
lib/rte_eal_mingw with a custom command 00:03:05.700 [79/738] Generating lib/rte_eal_def with a custom command 00:03:05.700 [80/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:05.700 [81/738] Generating lib/rte_ring_def with a custom command 00:03:05.700 [82/738] Generating lib/rte_ring_mingw with a custom command 00:03:05.961 [83/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:05.961 [84/738] Generating lib/rte_rcu_def with a custom command 00:03:05.961 [85/738] Generating lib/rte_rcu_mingw with a custom command 00:03:05.961 [86/738] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:05.961 [87/738] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:05.961 [88/738] Linking static target lib/librte_ring.a 00:03:05.961 [89/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:05.961 [90/738] Generating lib/rte_mempool_def with a custom command 00:03:05.961 [91/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:05.961 [92/738] Generating lib/rte_mempool_mingw with a custom command 00:03:05.961 [93/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:06.222 [94/738] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.222 [95/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:06.222 [96/738] Generating lib/rte_mbuf_def with a custom command 00:03:06.222 [97/738] Generating lib/rte_mbuf_mingw with a custom command 00:03:06.222 [98/738] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:06.222 [99/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:06.222 [100/738] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:06.222 [101/738] Linking static target lib/librte_eal.a 00:03:06.484 [102/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:06.484 [103/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:06.484 [104/738] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:06.484 [105/738] Linking static target lib/librte_rcu.a 00:03:06.745 [106/738] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:06.745 [107/738] Linking static target lib/librte_mempool.a 00:03:06.745 [108/738] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:06.745 [109/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:06.745 [110/738] Generating lib/rte_net_def with a custom command 00:03:06.745 [111/738] Generating lib/rte_net_mingw with a custom command 00:03:06.745 [112/738] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:06.745 [113/738] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:06.745 [114/738] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:06.745 [115/738] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:06.745 [116/738] Generating lib/rte_meter_def with a custom command 00:03:06.745 [117/738] Generating lib/rte_meter_mingw with a custom command 00:03:06.745 [118/738] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:06.745 [119/738] Linking static target lib/librte_meter.a 00:03:07.006 [120/738] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.006 [121/738] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.268 [122/738] Compiling C 
object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:07.268 [123/738] Linking static target lib/librte_net.a 00:03:07.268 [124/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:07.268 [125/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:07.268 [126/738] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:07.268 [127/738] Linking static target lib/librte_mbuf.a 00:03:07.268 [128/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:07.268 [129/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:07.268 [130/738] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:07.529 [131/738] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.529 [132/738] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.529 [133/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:07.789 [134/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:07.789 [135/738] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.789 [136/738] Generating lib/rte_ethdev_def with a custom command 00:03:07.789 [137/738] Generating lib/rte_ethdev_mingw with a custom command 00:03:07.789 [138/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:07.789 [139/738] Generating lib/rte_pci_def with a custom command 00:03:07.789 [140/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:08.050 [141/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:08.050 [142/738] Generating lib/rte_pci_mingw with a custom command 00:03:08.050 [143/738] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:08.050 [144/738] Linking static target lib/librte_pci.a 00:03:08.050 [145/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:08.050 [146/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:08.050 [147/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:08.050 [148/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:08.050 [149/738] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.050 [150/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:08.050 [151/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:08.311 [152/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:08.311 [153/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:08.311 [154/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:08.311 [155/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:08.311 [156/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:08.311 [157/738] Generating lib/rte_cmdline_def with a custom command 00:03:08.311 [158/738] Generating lib/rte_cmdline_mingw with a custom command 00:03:08.311 [159/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:08.311 [160/738] Generating lib/rte_metrics_def with a custom command 00:03:08.311 [161/738] Generating lib/rte_metrics_mingw with a custom command 00:03:08.311 [162/738] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:08.311 [163/738] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:08.311 [164/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:08.311 [165/738] Generating lib/rte_hash_def with a custom command 00:03:08.311 [166/738] Generating lib/rte_hash_mingw with a custom command 00:03:08.572 [167/738] Generating lib/rte_timer_def with a custom command 00:03:08.572 [168/738] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:08.572 [169/738] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:08.572 [170/738] Linking static target lib/librte_cmdline.a 00:03:08.572 [171/738] Generating lib/rte_timer_mingw with a custom command 00:03:08.572 [172/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:08.572 [173/738] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:08.572 [174/738] Linking static target lib/librte_metrics.a 00:03:08.833 [175/738] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:08.833 [176/738] Linking static target lib/librte_timer.a 00:03:09.092 [177/738] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.092 [178/738] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:09.092 [179/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:09.092 [180/738] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.092 [181/738] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:09.350 [182/738] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.350 [183/738] Generating lib/rte_acl_def with a custom command 00:03:09.350 [184/738] Generating lib/rte_acl_mingw with a custom command 00:03:09.350 [185/738] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:09.350 [186/738] Generating lib/rte_bbdev_def with a custom command 00:03:09.350 [187/738] Generating lib/rte_bbdev_mingw with a custom command 00:03:09.350 [188/738] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:09.350 [189/738] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:09.350 [190/738] Linking static target lib/librte_ethdev.a 00:03:09.350 [191/738] Generating lib/rte_bitratestats_def with a custom command 00:03:09.350 [192/738] Generating lib/rte_bitratestats_mingw with a custom command 00:03:09.607 [193/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:09.607 [194/738] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:09.607 [195/738] Linking static target lib/librte_bitratestats.a 00:03:09.924 [196/738] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.924 [197/738] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:09.924 [198/738] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:09.924 [199/738] Linking static target lib/librte_bbdev.a 00:03:10.197 [200/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:10.197 [201/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:10.197 [202/738] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:10.197 [203/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:10.197 [204/738] Linking static target lib/librte_hash.a 00:03:10.457 [205/738] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.457 [206/738] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:10.718 [207/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:10.718 [208/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:10.718 [209/738] Generating lib/rte_bpf_def with a custom command 00:03:10.718 [210/738] Generating lib/rte_bpf_mingw with a custom command 00:03:10.978 [211/738] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.978 [212/738] Generating lib/rte_cfgfile_def with a custom command 00:03:10.978 [213/738] Generating lib/rte_cfgfile_mingw with a custom command 00:03:10.978 [214/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:10.978 [215/738] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:10.978 [216/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:10.978 [217/738] Linking static target lib/librte_cfgfile.a 00:03:11.239 [218/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:11.239 [219/738] Generating lib/rte_compressdev_def with a custom command 00:03:11.239 [220/738] Generating lib/rte_compressdev_mingw with a custom command 00:03:11.239 [221/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:11.239 [222/738] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:11.239 [223/738] Linking static target lib/librte_acl.a 00:03:11.239 [224/738] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:11.239 [225/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:11.239 [226/738] Linking static target lib/librte_bpf.a 00:03:11.239 [227/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:11.239 [228/738] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.239 [229/738] Generating lib/rte_cryptodev_def with a custom command 00:03:11.239 [230/738] Generating lib/rte_cryptodev_mingw with a custom command 00:03:11.500 [231/738] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.500 [232/738] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.500 [233/738] Generating lib/rte_distributor_def with a custom command 00:03:11.500 [234/738] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:11.500 [235/738] Generating lib/rte_distributor_mingw with a custom command 00:03:11.500 [236/738] Linking static target lib/librte_compressdev.a 00:03:11.500 [237/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:11.500 [238/738] Generating lib/rte_efd_def with a custom command 00:03:11.500 [239/738] Generating lib/rte_efd_mingw with a custom command 00:03:11.761 [240/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:11.761 [241/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:11.761 [242/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:11.761 [243/738] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:12.020 [244/738] Linking static target lib/librte_distributor.a 00:03:12.020 [245/738] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.020 [246/738] Linking target lib/librte_eal.so.23.0 00:03:12.020 [247/738] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:12.020 [248/738] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:12.020 [249/738] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:12.020 [250/738] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.020 [251/738] Linking target lib/librte_ring.so.23.0 00:03:12.020 [252/738] Linking target lib/librte_meter.so.23.0 00:03:12.278 [253/738] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.278 [254/738] Linking target lib/librte_pci.so.23.0 00:03:12.278 [255/738] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:12.278 [256/738] Linking target lib/librte_rcu.so.23.0 00:03:12.278 [257/738] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:12.278 [258/738] Linking target lib/librte_mempool.so.23.0 00:03:12.278 [259/738] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:12.278 [260/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:12.278 [261/738] Linking target lib/librte_timer.so.23.0 00:03:12.278 [262/738] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:12.278 [263/738] Linking target lib/librte_acl.so.23.0 00:03:12.278 [264/738] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:12.278 [265/738] Linking target lib/librte_cfgfile.so.23.0 00:03:12.278 [266/738] Linking target lib/librte_mbuf.so.23.0 00:03:12.537 [267/738] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:12.537 [268/738] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:12.537 [269/738] Generating lib/rte_eventdev_def with a custom command 00:03:12.537 [270/738] Generating lib/rte_eventdev_mingw with a custom command 00:03:12.537 [271/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:12.537 [272/738] Generating lib/rte_gpudev_def with a custom command 00:03:12.537 [273/738] Generating lib/rte_gpudev_mingw with a custom command 00:03:12.537 [274/738] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:12.537 [275/738] Linking target lib/librte_net.so.23.0 00:03:12.537 [276/738] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:12.537 [277/738] Linking target lib/librte_cmdline.so.23.0 00:03:12.537 [278/738] Linking target lib/librte_hash.so.23.0 00:03:12.795 [279/738] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:12.795 [280/738] Linking target lib/librte_bbdev.so.23.0 00:03:12.795 [281/738] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:12.795 [282/738] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:12.795 [283/738] Linking static target lib/librte_cryptodev.a 00:03:12.795 [284/738] Linking target lib/librte_distributor.so.23.0 00:03:12.795 [285/738] Linking target lib/librte_compressdev.so.23.0 00:03:12.795 [286/738] Linking static target lib/librte_efd.a 00:03:12.795 [287/738] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.795 [288/738] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:12.795 [289/738] Linking static target lib/librte_gpudev.a 00:03:13.054 [290/738] Linking target 
lib/librte_ethdev.so.23.0 00:03:13.054 [291/738] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.054 [292/738] Linking target lib/librte_efd.so.23.0 00:03:13.054 [293/738] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:13.054 [294/738] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:13.054 [295/738] Linking target lib/librte_metrics.so.23.0 00:03:13.054 [296/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:13.054 [297/738] Linking target lib/librte_bpf.so.23.0 00:03:13.054 [298/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:13.054 [299/738] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:13.313 [300/738] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:13.313 [301/738] Linking target lib/librte_bitratestats.so.23.0 00:03:13.313 [302/738] Generating lib/rte_gro_def with a custom command 00:03:13.313 [303/738] Generating lib/rte_gro_mingw with a custom command 00:03:13.313 [304/738] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:13.572 [305/738] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.572 [306/738] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:13.572 [307/738] Linking target lib/librte_gpudev.so.23.0 00:03:13.572 [308/738] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:13.572 [309/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:13.572 [310/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:13.572 [311/738] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:13.572 [312/738] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:13.572 [313/738] Generating lib/rte_gso_def with a custom command 00:03:13.572 [314/738] Generating lib/rte_gso_mingw with a custom command 00:03:13.572 [315/738] Linking static target lib/librte_eventdev.a 00:03:13.572 [316/738] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:13.572 [317/738] Linking static target lib/librte_gro.a 00:03:13.572 [318/738] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:13.830 [319/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:13.830 [320/738] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.830 [321/738] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:13.830 [322/738] Linking target lib/librte_gro.so.23.0 00:03:13.830 [323/738] Generating lib/rte_ip_frag_def with a custom command 00:03:13.830 [324/738] Generating lib/rte_ip_frag_mingw with a custom command 00:03:13.830 [325/738] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:13.830 [326/738] Linking static target lib/librte_gso.a 00:03:13.830 [327/738] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:13.830 [328/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:13.830 [329/738] Linking static target lib/librte_jobstats.a 00:03:14.088 [330/738] Generating lib/rte_jobstats_def with a custom command 00:03:14.088 [331/738] Generating lib/rte_jobstats_mingw with a custom command 00:03:14.088 [332/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:14.088 [333/738] Generating lib/rte_latencystats_def with a custom 
command 00:03:14.088 [334/738] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.088 [335/738] Generating lib/rte_latencystats_mingw with a custom command 00:03:14.088 [336/738] Linking target lib/librte_gso.so.23.0 00:03:14.088 [337/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:14.088 [338/738] Generating lib/rte_lpm_def with a custom command 00:03:14.088 [339/738] Generating lib/rte_lpm_mingw with a custom command 00:03:14.088 [340/738] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.088 [341/738] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.088 [342/738] Linking target lib/librte_cryptodev.so.23.0 00:03:14.088 [343/738] Linking target lib/librte_jobstats.so.23.0 00:03:14.088 [344/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:14.088 [345/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:14.346 [346/738] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:14.346 [347/738] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:14.346 [348/738] Linking static target lib/librte_ip_frag.a 00:03:14.346 [349/738] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:14.346 [350/738] Linking static target lib/librte_latencystats.a 00:03:14.346 [351/738] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:14.605 [352/738] Generating lib/rte_member_def with a custom command 00:03:14.605 [353/738] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.605 [354/738] Generating lib/rte_member_mingw with a custom command 00:03:14.605 [355/738] Linking target lib/librte_ip_frag.so.23.0 00:03:14.605 [356/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:14.605 [357/738] Generating lib/rte_pcapng_def with a custom command 00:03:14.605 [358/738] Generating lib/rte_pcapng_mingw with a custom command 00:03:14.605 [359/738] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.605 [360/738] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:14.605 [361/738] Linking target lib/librte_latencystats.so.23.0 00:03:14.605 [362/738] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:14.605 [363/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:14.605 [364/738] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:14.605 [365/738] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:14.863 [366/738] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:14.863 [367/738] Linking static target lib/librte_lpm.a 00:03:14.863 [368/738] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:14.863 [369/738] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:14.863 [370/738] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:14.863 [371/738] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.863 [372/738] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:14.863 [373/738] Generating lib/rte_power_def with a custom command 00:03:14.863 [374/738] Linking target 
lib/librte_eventdev.so.23.0 00:03:15.122 [375/738] Generating lib/rte_power_mingw with a custom command 00:03:15.122 [376/738] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:15.122 [377/738] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:15.122 [378/738] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.122 [379/738] Linking static target lib/librte_pcapng.a 00:03:15.122 [380/738] Generating lib/rte_rawdev_def with a custom command 00:03:15.122 [381/738] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:15.122 [382/738] Generating lib/rte_rawdev_mingw with a custom command 00:03:15.122 [383/738] Linking target lib/librte_lpm.so.23.0 00:03:15.122 [384/738] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:15.122 [385/738] Generating lib/rte_regexdev_def with a custom command 00:03:15.122 [386/738] Generating lib/rte_regexdev_mingw with a custom command 00:03:15.122 [387/738] Generating lib/rte_dmadev_def with a custom command 00:03:15.122 [388/738] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:15.122 [389/738] Generating lib/rte_dmadev_mingw with a custom command 00:03:15.122 [390/738] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:15.122 [391/738] Generating lib/rte_rib_def with a custom command 00:03:15.122 [392/738] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:15.122 [393/738] Generating lib/rte_rib_mingw with a custom command 00:03:15.122 [394/738] Generating lib/rte_reorder_def with a custom command 00:03:15.122 [395/738] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:15.381 [396/738] Linking static target lib/librte_rawdev.a 00:03:15.381 [397/738] Generating lib/rte_reorder_mingw with a custom command 00:03:15.381 [398/738] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.381 [399/738] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:15.381 [400/738] Linking target lib/librte_pcapng.so.23.0 00:03:15.381 [401/738] Linking static target lib/librte_power.a 00:03:15.381 [402/738] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:15.381 [403/738] Linking static target lib/librte_dmadev.a 00:03:15.381 [404/738] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:15.639 [405/738] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:15.639 [406/738] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:15.639 [407/738] Linking static target lib/librte_regexdev.a 00:03:15.639 [408/738] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:15.639 [409/738] Linking static target lib/librte_member.a 00:03:15.639 [410/738] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.639 [411/738] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:15.639 [412/738] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:15.639 [413/738] Linking static target lib/librte_reorder.a 00:03:15.639 [414/738] Linking target lib/librte_rawdev.so.23.0 00:03:15.639 [415/738] Generating lib/rte_sched_def with a custom command 00:03:15.639 [416/738] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:15.639 [417/738] Generating lib/rte_sched_mingw with a custom command 
00:03:15.639 [418/738] Generating lib/rte_security_def with a custom command 00:03:15.639 [419/738] Generating lib/rte_security_mingw with a custom command 00:03:15.639 [420/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:15.639 [421/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:15.639 [422/738] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.639 [423/738] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.899 [424/738] Linking target lib/librte_reorder.so.23.0 00:03:15.899 [425/738] Linking target lib/librte_dmadev.so.23.0 00:03:15.899 [426/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:15.899 [427/738] Generating lib/rte_stack_def with a custom command 00:03:15.899 [428/738] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.899 [429/738] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:15.899 [430/738] Linking static target lib/librte_stack.a 00:03:15.899 [431/738] Generating lib/rte_stack_mingw with a custom command 00:03:15.899 [432/738] Linking target lib/librte_member.so.23.0 00:03:15.899 [433/738] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:15.899 [434/738] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:15.899 [435/738] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:15.899 [436/738] Linking static target lib/librte_rib.a 00:03:15.899 [437/738] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.899 [438/738] Linking target lib/librte_stack.so.23.0 00:03:15.899 [439/738] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.161 [440/738] Linking target lib/librte_power.so.23.0 00:03:16.161 [441/738] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.161 [442/738] Linking target lib/librte_regexdev.so.23.0 00:03:16.161 [443/738] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:16.161 [444/738] Linking static target lib/librte_security.a 00:03:16.161 [445/738] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.161 [446/738] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:16.161 [447/738] Generating lib/rte_vhost_def with a custom command 00:03:16.161 [448/738] Linking target lib/librte_rib.so.23.0 00:03:16.161 [449/738] Generating lib/rte_vhost_mingw with a custom command 00:03:16.422 [450/738] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:16.422 [451/738] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:16.422 [452/738] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:16.422 [453/738] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.422 [454/738] Linking target lib/librte_security.so.23.0 00:03:16.682 [455/738] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:16.682 [456/738] Linking static target lib/librte_sched.a 00:03:16.682 [457/738] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:16.682 [458/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:16.682 [459/738] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:16.682 [460/738] Generating lib/rte_ipsec_def with a custom 
command 00:03:16.943 [461/738] Generating lib/rte_ipsec_mingw with a custom command 00:03:16.943 [462/738] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.943 [463/738] Linking target lib/librte_sched.so.23.0 00:03:16.943 [464/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:16.943 [465/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:16.943 [466/738] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:16.943 [467/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:16.943 [468/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:17.204 [469/738] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:17.204 [470/738] Generating lib/rte_fib_def with a custom command 00:03:17.204 [471/738] Generating lib/rte_fib_mingw with a custom command 00:03:17.204 [472/738] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:17.463 [473/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:17.463 [474/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:17.463 [475/738] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:17.720 [476/738] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:17.720 [477/738] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:17.720 [478/738] Linking static target lib/librte_ipsec.a 00:03:17.720 [479/738] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:17.720 [480/738] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:17.720 [481/738] Linking static target lib/librte_fib.a 00:03:17.978 [482/738] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:17.978 [483/738] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:17.978 [484/738] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:17.978 [485/738] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.978 [486/738] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.978 [487/738] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:17.978 [488/738] Linking target lib/librte_fib.so.23.0 00:03:17.978 [489/738] Linking target lib/librte_ipsec.so.23.0 00:03:18.236 [490/738] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:18.493 [491/738] Generating lib/rte_port_def with a custom command 00:03:18.493 [492/738] Generating lib/rte_port_mingw with a custom command 00:03:18.493 [493/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:18.493 [494/738] Generating lib/rte_pdump_def with a custom command 00:03:18.493 [495/738] Generating lib/rte_pdump_mingw with a custom command 00:03:18.493 [496/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:18.493 [497/738] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:18.493 [498/738] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:18.493 [499/738] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:18.750 [500/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:18.750 [501/738] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:18.750 [502/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:18.750 [503/738] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:19.007 [504/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:19.007 [505/738] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:19.007 [506/738] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:19.007 [507/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:19.007 [508/738] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:19.007 [509/738] Linking static target lib/librte_pdump.a 00:03:19.265 [510/738] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:19.265 [511/738] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.265 [512/738] Linking target lib/librte_pdump.so.23.0 00:03:19.265 [513/738] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:19.265 [514/738] Linking static target lib/librte_port.a 00:03:19.523 [515/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:19.523 [516/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:19.523 [517/738] Generating lib/rte_table_def with a custom command 00:03:19.523 [518/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:19.523 [519/738] Generating lib/rte_table_mingw with a custom command 00:03:19.523 [520/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:19.523 [521/738] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:19.780 [522/738] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.780 [523/738] Linking target lib/librte_port.so.23.0 00:03:19.780 [524/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:19.780 [525/738] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:19.780 [526/738] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:19.780 [527/738] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:19.780 [528/738] Generating lib/rte_pipeline_def with a custom command 00:03:19.780 [529/738] Linking static target lib/librte_table.a 00:03:19.780 [530/738] Generating lib/rte_pipeline_mingw with a custom command 00:03:20.038 [531/738] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:20.038 [532/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:20.038 [533/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:20.295 [534/738] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.295 [535/738] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:20.295 [536/738] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:20.295 [537/738] Linking target lib/librte_table.so.23.0 00:03:20.295 [538/738] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:20.295 [539/738] Generating lib/rte_graph_def with a custom command 00:03:20.295 [540/738] Generating lib/rte_graph_mingw with a custom command 00:03:20.295 [541/738] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:20.295 [542/738] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:20.553 [543/738] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:20.810 [544/738] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:20.810 [545/738] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:20.810 [546/738] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:20.810 [547/738] Linking static target lib/librte_graph.a 00:03:20.810 [548/738] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:20.810 [549/738] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:20.810 [550/738] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:21.120 [551/738] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:21.120 [552/738] Generating lib/rte_node_def with a custom command 00:03:21.120 [553/738] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:21.120 [554/738] Generating lib/rte_node_mingw with a custom command 00:03:21.120 [555/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:21.120 [556/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:21.120 [557/738] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:21.391 [558/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:21.391 [559/738] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:21.391 [560/738] Generating drivers/rte_bus_pci_def with a custom command 00:03:21.391 [561/738] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:21.391 [562/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:21.391 [563/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:21.391 [564/738] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.391 [565/738] Generating drivers/rte_bus_vdev_def with a custom command 00:03:21.391 [566/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:21.391 [567/738] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:21.391 [568/738] Linking target lib/librte_graph.so.23.0 00:03:21.391 [569/738] Generating drivers/rte_mempool_ring_def with a custom command 00:03:21.391 [570/738] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:21.391 [571/738] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:21.391 [572/738] Linking static target lib/librte_node.a 00:03:21.391 [573/738] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:21.391 [574/738] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:21.391 [575/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:21.651 [576/738] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:21.651 [577/738] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:21.651 [578/738] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:21.651 [579/738] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:21.651 [580/738] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.651 [581/738] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:21.651 [582/738] Linking static target drivers/librte_bus_vdev.a 00:03:21.651 [583/738] Linking target lib/librte_node.so.23.0 00:03:21.651 [584/738] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:21.651 [585/738] Compiling C object 
drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:21.651 [586/738] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:21.912 [587/738] Linking static target drivers/librte_bus_pci.a 00:03:21.912 [588/738] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:21.912 [589/738] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.912 [590/738] Linking target drivers/librte_bus_vdev.so.23.0 00:03:21.912 [591/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:22.173 [592/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:22.173 [593/738] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:22.173 [594/738] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.173 [595/738] Linking target drivers/librte_bus_pci.so.23.0 00:03:22.173 [596/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:22.173 [597/738] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:22.434 [598/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:22.434 [599/738] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:22.434 [600/738] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:22.695 [601/738] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:22.695 [602/738] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:22.695 [603/738] Linking static target drivers/librte_mempool_ring.a 00:03:22.695 [604/738] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:22.695 [605/738] Linking target drivers/librte_mempool_ring.so.23.0 00:03:22.695 [606/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:22.956 [607/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:23.217 [608/738] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:23.217 [609/738] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:23.477 [610/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:23.477 [611/738] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:23.477 [612/738] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:23.737 [613/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:23.737 [614/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:23.996 [615/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:23.996 [616/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:23.996 [617/738] Generating drivers/rte_net_i40e_def with a custom command 00:03:23.996 [618/738] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:23.996 [619/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:24.565 [620/738] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:24.824 [621/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:24.824 [622/738] Compiling C object 
app/dpdk-pdump.p/pdump_main.c.o 00:03:24.824 [623/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:24.824 [624/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:25.084 [625/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:25.084 [626/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:25.343 [627/738] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:25.344 [628/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:25.344 [629/738] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:25.344 [630/738] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:25.344 [631/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:25.344 [632/738] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:25.344 [633/738] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:25.604 [634/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:25.604 [635/738] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:25.604 [636/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:25.604 [637/738] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:25.604 [638/738] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:25.604 [639/738] Linking static target drivers/librte_net_i40e.a 00:03:25.865 [640/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:25.865 [641/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:25.865 [642/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:26.126 [643/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:26.126 [644/738] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.126 [645/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:26.126 [646/738] Linking target drivers/librte_net_i40e.so.23.0 00:03:26.126 [647/738] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:26.387 [648/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:26.387 [649/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:26.387 [650/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:26.649 [651/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:26.649 [652/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:26.649 [653/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:26.649 [654/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:26.649 [655/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:26.910 [656/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:26.910 [657/738] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:26.910 [658/738] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:26.911 [659/738] Linking static target lib/librte_vhost.a 00:03:26.911 [660/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:26.911 [661/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:26.911 [662/738] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:26.911 [663/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:27.170 [664/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:27.170 [665/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:27.428 [666/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:27.687 [667/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:27.687 [668/738] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.687 [669/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:27.687 [670/738] Linking target lib/librte_vhost.so.23.0 00:03:27.687 [671/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:27.946 [672/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:27.946 [673/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:27.946 [674/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:27.946 [675/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:27.946 [676/738] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:28.205 [677/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:28.205 [678/738] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:28.205 [679/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:28.205 [680/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:28.205 [681/738] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:28.205 [682/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:28.205 [683/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:28.464 [684/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:28.464 [685/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:28.464 [686/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:28.464 [687/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:28.464 [688/738] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:28.723 [689/738] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:28.723 [690/738] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:28.981 [691/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:28.981 [692/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:28.981 [693/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:29.240 [694/738] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:29.240 
[695/738] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:29.240 [696/738] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:29.499 [697/738] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:29.499 [698/738] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:29.499 [699/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:29.757 [700/738] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:29.757 [701/738] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:29.757 [702/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:30.016 [703/738] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:30.016 [704/738] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:30.016 [705/738] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:30.275 [706/738] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:30.275 [707/738] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:30.275 [708/738] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:30.533 [709/738] Linking static target lib/librte_pipeline.a 00:03:30.533 [710/738] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:30.533 [711/738] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:30.533 [712/738] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:30.792 [713/738] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:30.792 [714/738] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:30.792 [715/738] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:30.792 [716/738] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:30.792 [717/738] Linking target app/dpdk-dumpcap 00:03:30.792 [718/738] Linking target app/dpdk-pdump 00:03:30.792 [719/738] Linking target app/dpdk-proc-info 00:03:31.051 [720/738] Linking target app/dpdk-test-acl 00:03:31.051 [721/738] Linking target app/dpdk-test-bbdev 00:03:31.051 [722/738] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:31.051 [723/738] Linking target app/dpdk-test-cmdline 00:03:31.051 [724/738] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:31.051 [725/738] Linking target app/dpdk-test-compress-perf 00:03:31.051 [726/738] Linking target app/dpdk-test-crypto-perf 00:03:31.051 [727/738] Linking target app/dpdk-test-eventdev 00:03:31.051 [728/738] Linking target app/dpdk-test-fib 00:03:31.321 [729/738] Linking target app/dpdk-test-flow-perf 00:03:31.321 [730/738] Linking target app/dpdk-test-gpudev 00:03:31.321 [731/738] Linking target app/dpdk-test-pipeline 00:03:31.321 [732/738] Linking target app/dpdk-test-regex 00:03:31.321 [733/738] Linking target app/dpdk-test-sad 00:03:31.321 [734/738] Linking target app/dpdk-testpmd 00:03:31.898 [735/738] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:32.156 [736/738] Linking target app/dpdk-test-security-perf 00:03:33.530 [737/738] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.530 [738/738] Linking target lib/librte_pipeline.so.23.0 00:03:33.530 20:45:51 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:33.530 20:45:51 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:33.530 20:45:51 build_native_dpdk -- 
common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:33.530 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:33.530 [0/1] Installing files. 00:03:33.790 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:33.790 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:33.791 
Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.791 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:33.792 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 
00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.793 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:33.794 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:33.795 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:33.795 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.795 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.795 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.795 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.795 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.795 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.795 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.795 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.795 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.795 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.795 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.795 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.795 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.795 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:33.795 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.056 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.056 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.056 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.056 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.056 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:34.057 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:34.057 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:34.057 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:34.057 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:03:34.057 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:34.057 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.057 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.057 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.057 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:34.057 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:34.057 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:34.057 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.058 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.059 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to
/home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:34.060 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:34.061 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:34.061 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:34.061 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:34.061 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:34.061 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:34.061 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:34.061 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:34.061 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:34.061 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:34.061 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:34.061 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:34.061 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:34.061 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:34.061 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:34.061 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:34.061 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:34.061 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:34.061 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:34.061 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:34.061 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:34.061 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:34.061 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:34.061 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:34.061 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:34.061 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:34.061 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:34.061 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:34.061 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:34.061 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:34.061 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:34.061 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:34.061 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:34.061 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:34.061 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:34.061 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:34.061 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:34.061 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:34.061 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:34.061 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:34.061 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:34.061 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:34.061 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:34.061 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:34.061 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:34.061 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:34.061 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:34.061 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:34.061 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:34.061 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:34.061 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:34.061 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:34.061 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:34.061 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:34.061 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:34.061 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:34.061 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:34.061 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:34.061 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:34.061 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:34.061 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:34.061 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:34.061 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:34.061 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:34.061 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:34.061 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:34.061 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:34.061 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:34.061 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:34.061 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:34.061 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:34.061 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:34.061 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:34.061 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:34.061 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:34.061 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:34.061 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:34.061 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:34.061 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:34.061 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
00:03:34.061 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:34.061 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:34.061 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:34.061 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:34.061 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:34.061 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:34.062 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:34.062 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:34.062 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:34.062 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:34.062 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:34.062 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:34.062 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:34.062 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:34.062 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:34.062 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:34.062 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:34.062 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:34.062 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:34.062 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:34.062 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:34.062 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:34.062 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:34.062 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:34.062 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:34.062 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:34.062 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:34.062 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:34.062 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:34.062 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:34.062 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:34.062 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:34.062 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:34.062 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:34.062 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:34.062 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:34.062 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:34.062 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:34.062 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:34.062 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:34.062 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:34.062 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:34.062 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:34.062 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:34.062 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:34.062 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:34.371 ************************************ 00:03:34.371 END TEST build_native_dpdk 00:03:34.371 ************************************ 00:03:34.371 20:45:52 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:34.371 20:45:52 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:34.371 00:03:34.371 real 0m35.546s 00:03:34.371 user 3m51.140s 00:03:34.371 sys 0m37.639s 00:03:34.371 20:45:52 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:34.371 20:45:52 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:34.371 20:45:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:34.371 20:45:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:34.371 20:45:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:34.371 20:45:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:34.371 20:45:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:34.371 20:45:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:34.371 20:45:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:34.371 20:45:52 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug 
--enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:34.371 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:34.371 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:34.371 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:34.371 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:34.938 Using 'verbs' RDMA provider 00:03:45.845 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:55.928 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:55.928 Creating mk/config.mk...done. 00:03:55.928 Creating mk/cc.flags.mk...done. 00:03:55.928 Type 'make' to build. 00:03:55.928 20:46:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:55.928 20:46:13 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:55.928 20:46:13 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:55.928 20:46:13 -- common/autotest_common.sh@10 -- $ set +x 00:03:55.928 ************************************ 00:03:55.928 START TEST make 00:03:55.928 ************************************ 00:03:55.928 20:46:13 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:55.928 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:55.928 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:55.928 meson setup builddir \ 00:03:55.928 -Dwith-libaio=enabled \ 00:03:55.928 -Dwith-liburing=enabled \ 00:03:55.928 -Dwith-libvfn=disabled \ 00:03:55.929 -Dwith-spdk=disabled \ 00:03:55.929 -Dexamples=false \ 00:03:55.929 -Dtests=false \ 00:03:55.929 -Dtools=false && \ 00:03:55.929 meson compile -C builddir && \ 00:03:55.929 cd -) 00:03:55.929 make[1]: Nothing to be done for 'all'. 
00:03:57.842 The Meson build system 00:03:57.842 Version: 1.5.0 00:03:57.842 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:57.842 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:57.842 Build type: native build 00:03:57.842 Project name: xnvme 00:03:57.842 Project version: 0.7.5 00:03:57.842 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:57.842 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:57.842 Host machine cpu family: x86_64 00:03:57.842 Host machine cpu: x86_64 00:03:57.842 Message: host_machine.system: linux 00:03:57.842 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:57.842 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:57.842 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:57.842 Run-time dependency threads found: YES 00:03:57.842 Has header "setupapi.h" : NO 00:03:57.842 Has header "linux/blkzoned.h" : YES 00:03:57.842 Has header "linux/blkzoned.h" : YES (cached) 00:03:57.842 Has header "libaio.h" : YES 00:03:57.842 Library aio found: YES 00:03:57.842 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:57.842 Run-time dependency liburing found: YES 2.2 00:03:57.842 Dependency libvfn skipped: feature with-libvfn disabled 00:03:57.842 Found CMake: /usr/bin/cmake (3.27.7) 00:03:57.842 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:57.842 Subproject spdk : skipped: feature with-spdk disabled 00:03:57.842 Run-time dependency appleframeworks found: NO (tried framework) 00:03:57.842 Run-time dependency appleframeworks found: NO (tried framework) 00:03:57.842 Library rt found: YES 00:03:57.842 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:57.842 Configuring xnvme_config.h using configuration 00:03:57.842 Configuring xnvme.spec using configuration 00:03:57.842 Run-time dependency bash-completion found: YES 2.11 00:03:57.842 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:57.842 Program cp found: YES (/usr/bin/cp) 00:03:57.842 Build targets in project: 3 00:03:57.842 00:03:57.842 xnvme 0.7.5 00:03:57.842 00:03:57.842 Subprojects 00:03:57.842 spdk : NO Feature 'with-spdk' disabled 00:03:57.842 00:03:57.842 User defined options 00:03:57.842 examples : false 00:03:57.842 tests : false 00:03:57.842 tools : false 00:03:57.842 with-libaio : enabled 00:03:57.842 with-liburing: enabled 00:03:57.843 with-libvfn : disabled 00:03:57.843 with-spdk : disabled 00:03:57.843 00:03:57.843 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:58.104 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:58.104 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:58.365 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:58.365 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:58.365 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:58.365 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:58.365 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:58.365 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:58.365 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:58.365 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:58.365 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:58.365 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:58.365 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:58.365 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:58.365 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:58.365 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:58.365 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:58.365 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:58.365 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:58.365 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:58.626 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:58.626 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:58.626 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:58.626 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:58.626 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:58.626 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:58.626 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:58.626 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:58.626 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:58.626 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:58.626 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:58.626 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:58.626 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:58.626 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:58.626 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:58.626 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:58.626 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:58.626 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:58.626 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:58.626 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:58.626 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:58.626 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:58.626 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:58.626 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:58.626 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:58.626 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:58.626 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:58.626 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:58.626 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:58.626 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:58.626 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:58.626 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:58.626 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:58.626 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:58.888 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:58.888 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:58.888 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:58.888 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:58.888 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:58.888 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:58.888 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:58.888 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:58.888 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:58.888 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:58.888 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:58.888 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:58.888 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:58.888 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:58.888 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:58.888 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:59.151 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:59.151 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:59.151 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:59.151 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:59.412 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:59.412 [75/76] Linking static target lib/libxnvme.a 00:03:59.412 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:59.412 INFO: autodetecting backend as ninja 00:03:59.412 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:59.412 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:31.515 CC lib/log/log.o 00:04:31.515 CC lib/log/log_deprecated.o 00:04:31.515 CC lib/log/log_flags.o 00:04:31.515 CC lib/ut_mock/mock.o 00:04:31.515 CC lib/ut/ut.o 00:04:31.515 LIB libspdk_ut.a 00:04:31.515 SO libspdk_ut.so.2.0 00:04:31.515 LIB libspdk_ut_mock.a 00:04:31.515 LIB libspdk_log.a 00:04:31.515 SO libspdk_ut_mock.so.6.0 00:04:31.515 SO libspdk_log.so.7.1 00:04:31.515 SYMLINK libspdk_ut.so 00:04:31.515 SYMLINK libspdk_ut_mock.so 00:04:31.515 SYMLINK libspdk_log.so 00:04:31.515 CC lib/util/base64.o 00:04:31.515 CC lib/util/crc16.o 00:04:31.515 CC lib/util/bit_array.o 00:04:31.515 CC lib/util/crc32.o 00:04:31.515 CC lib/util/cpuset.o 00:04:31.515 CC lib/util/crc32c.o 00:04:31.515 CC lib/ioat/ioat.o 00:04:31.515 CC lib/dma/dma.o 00:04:31.515 CXX lib/trace_parser/trace.o 00:04:31.515 CC lib/vfio_user/host/vfio_user_pci.o 00:04:31.515 CC lib/vfio_user/host/vfio_user.o 00:04:31.515 CC lib/util/crc32_ieee.o 00:04:31.515 CC lib/util/crc64.o 00:04:31.515 CC lib/util/dif.o 00:04:31.515 CC lib/util/fd.o 00:04:31.515 LIB libspdk_dma.a 00:04:31.515 SO libspdk_dma.so.5.0 00:04:31.515 LIB libspdk_ioat.a 00:04:31.515 CC lib/util/fd_group.o 00:04:31.515 CC lib/util/file.o 00:04:31.515 CC lib/util/hexlify.o 00:04:31.515 SO libspdk_ioat.so.7.0 00:04:31.515 SYMLINK libspdk_dma.so 
00:04:31.515 CC lib/util/iov.o 00:04:31.515 CC lib/util/math.o 00:04:31.515 SYMLINK libspdk_ioat.so 00:04:31.515 CC lib/util/net.o 00:04:31.515 LIB libspdk_vfio_user.a 00:04:31.515 CC lib/util/pipe.o 00:04:31.515 SO libspdk_vfio_user.so.5.0 00:04:31.515 CC lib/util/strerror_tls.o 00:04:31.515 CC lib/util/string.o 00:04:31.515 SYMLINK libspdk_vfio_user.so 00:04:31.515 CC lib/util/uuid.o 00:04:31.515 CC lib/util/xor.o 00:04:31.515 CC lib/util/zipf.o 00:04:31.515 CC lib/util/md5.o 00:04:31.515 LIB libspdk_util.a 00:04:31.515 SO libspdk_util.so.10.1 00:04:31.515 SYMLINK libspdk_util.so 00:04:31.515 LIB libspdk_trace_parser.a 00:04:31.515 SO libspdk_trace_parser.so.6.0 00:04:31.515 CC lib/json/json_parse.o 00:04:31.515 CC lib/json/json_util.o 00:04:31.515 CC lib/json/json_write.o 00:04:31.515 CC lib/rdma_utils/rdma_utils.o 00:04:31.515 CC lib/conf/conf.o 00:04:31.515 CC lib/idxd/idxd.o 00:04:31.515 CC lib/idxd/idxd_user.o 00:04:31.515 CC lib/vmd/vmd.o 00:04:31.515 SYMLINK libspdk_trace_parser.so 00:04:31.515 CC lib/env_dpdk/env.o 00:04:31.515 CC lib/env_dpdk/memory.o 00:04:31.515 CC lib/env_dpdk/pci.o 00:04:31.515 CC lib/vmd/led.o 00:04:31.515 CC lib/idxd/idxd_kernel.o 00:04:31.515 LIB libspdk_rdma_utils.a 00:04:31.515 SO libspdk_rdma_utils.so.1.0 00:04:31.515 LIB libspdk_conf.a 00:04:31.515 SO libspdk_conf.so.6.0 00:04:31.515 SYMLINK libspdk_rdma_utils.so 00:04:31.515 SYMLINK libspdk_conf.so 00:04:31.515 LIB libspdk_json.a 00:04:31.515 CC lib/env_dpdk/init.o 00:04:31.515 CC lib/env_dpdk/threads.o 00:04:31.515 SO libspdk_json.so.6.0 00:04:31.515 CC lib/env_dpdk/pci_ioat.o 00:04:31.515 CC lib/env_dpdk/pci_virtio.o 00:04:31.515 SYMLINK libspdk_json.so 00:04:31.515 CC lib/env_dpdk/pci_vmd.o 00:04:31.515 CC lib/env_dpdk/pci_idxd.o 00:04:31.515 CC lib/env_dpdk/pci_event.o 00:04:31.515 CC lib/rdma_provider/common.o 00:04:31.515 CC lib/env_dpdk/sigbus_handler.o 00:04:31.515 CC lib/env_dpdk/pci_dpdk.o 00:04:31.515 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:31.515 LIB libspdk_vmd.a 00:04:31.515 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:31.515 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:31.515 CC lib/jsonrpc/jsonrpc_server.o 00:04:31.515 SO libspdk_vmd.so.6.0 00:04:31.515 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:31.515 CC lib/jsonrpc/jsonrpc_client.o 00:04:31.515 SYMLINK libspdk_vmd.so 00:04:31.515 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:31.515 LIB libspdk_idxd.a 00:04:31.515 SO libspdk_idxd.so.12.1 00:04:31.515 LIB libspdk_rdma_provider.a 00:04:31.515 SYMLINK libspdk_idxd.so 00:04:31.515 SO libspdk_rdma_provider.so.7.0 00:04:31.515 SYMLINK libspdk_rdma_provider.so 00:04:31.515 LIB libspdk_jsonrpc.a 00:04:31.515 SO libspdk_jsonrpc.so.6.0 00:04:31.515 SYMLINK libspdk_jsonrpc.so 00:04:31.774 CC lib/rpc/rpc.o 00:04:32.033 LIB libspdk_rpc.a 00:04:32.033 SO libspdk_rpc.so.6.0 00:04:32.033 SYMLINK libspdk_rpc.so 00:04:32.033 LIB libspdk_env_dpdk.a 00:04:32.291 SO libspdk_env_dpdk.so.15.1 00:04:32.291 CC lib/trace/trace_flags.o 00:04:32.291 CC lib/trace/trace_rpc.o 00:04:32.291 CC lib/trace/trace.o 00:04:32.291 CC lib/keyring/keyring_rpc.o 00:04:32.291 CC lib/keyring/keyring.o 00:04:32.291 CC lib/notify/notify.o 00:04:32.291 CC lib/notify/notify_rpc.o 00:04:32.291 SYMLINK libspdk_env_dpdk.so 00:04:32.291 LIB libspdk_notify.a 00:04:32.549 SO libspdk_notify.so.6.0 00:04:32.549 LIB libspdk_keyring.a 00:04:32.549 SYMLINK libspdk_notify.so 00:04:32.549 LIB libspdk_trace.a 00:04:32.549 SO libspdk_keyring.so.2.0 00:04:32.549 SO libspdk_trace.so.11.0 00:04:32.549 SYMLINK libspdk_keyring.so 00:04:32.549 SYMLINK 
libspdk_trace.so 00:04:32.807 CC lib/sock/sock_rpc.o 00:04:32.807 CC lib/sock/sock.o 00:04:32.807 CC lib/thread/thread.o 00:04:32.807 CC lib/thread/iobuf.o 00:04:33.066 LIB libspdk_sock.a 00:04:33.066 SO libspdk_sock.so.10.0 00:04:33.066 SYMLINK libspdk_sock.so 00:04:33.327 CC lib/nvme/nvme_ctrlr.o 00:04:33.327 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:33.327 CC lib/nvme/nvme_ns.o 00:04:33.327 CC lib/nvme/nvme_pcie_common.o 00:04:33.327 CC lib/nvme/nvme_qpair.o 00:04:33.327 CC lib/nvme/nvme_fabric.o 00:04:33.327 CC lib/nvme/nvme_pcie.o 00:04:33.327 CC lib/nvme/nvme_ns_cmd.o 00:04:33.327 CC lib/nvme/nvme.o 00:04:34.268 CC lib/nvme/nvme_quirks.o 00:04:34.268 CC lib/nvme/nvme_transport.o 00:04:34.268 CC lib/nvme/nvme_discovery.o 00:04:34.268 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:34.268 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:34.268 CC lib/nvme/nvme_tcp.o 00:04:34.268 CC lib/nvme/nvme_opal.o 00:04:34.268 LIB libspdk_thread.a 00:04:34.268 SO libspdk_thread.so.11.0 00:04:34.268 SYMLINK libspdk_thread.so 00:04:34.268 CC lib/nvme/nvme_io_msg.o 00:04:34.528 CC lib/nvme/nvme_poll_group.o 00:04:34.528 CC lib/nvme/nvme_zns.o 00:04:34.528 CC lib/nvme/nvme_stubs.o 00:04:34.528 CC lib/nvme/nvme_auth.o 00:04:34.788 CC lib/nvme/nvme_cuse.o 00:04:34.788 CC lib/nvme/nvme_rdma.o 00:04:34.788 CC lib/accel/accel.o 00:04:34.788 CC lib/accel/accel_rpc.o 00:04:35.048 CC lib/blob/blobstore.o 00:04:35.048 CC lib/init/json_config.o 00:04:35.048 CC lib/accel/accel_sw.o 00:04:35.048 CC lib/virtio/virtio.o 00:04:35.310 CC lib/init/subsystem.o 00:04:35.310 CC lib/init/subsystem_rpc.o 00:04:35.577 CC lib/virtio/virtio_vhost_user.o 00:04:35.578 CC lib/init/rpc.o 00:04:35.578 CC lib/blob/request.o 00:04:35.578 CC lib/blob/zeroes.o 00:04:35.578 CC lib/fsdev/fsdev.o 00:04:35.578 LIB libspdk_init.a 00:04:35.578 SO libspdk_init.so.6.0 00:04:35.578 CC lib/blob/blob_bs_dev.o 00:04:35.578 CC lib/fsdev/fsdev_io.o 00:04:35.578 SYMLINK libspdk_init.so 00:04:35.578 CC lib/fsdev/fsdev_rpc.o 00:04:35.868 CC lib/virtio/virtio_vfio_user.o 00:04:35.868 CC lib/virtio/virtio_pci.o 00:04:35.868 CC lib/event/app.o 00:04:35.868 CC lib/event/reactor.o 00:04:35.868 CC lib/event/log_rpc.o 00:04:35.868 CC lib/event/app_rpc.o 00:04:36.130 LIB libspdk_virtio.a 00:04:36.130 CC lib/event/scheduler_static.o 00:04:36.130 SO libspdk_virtio.so.7.0 00:04:36.130 LIB libspdk_accel.a 00:04:36.130 LIB libspdk_fsdev.a 00:04:36.130 SO libspdk_accel.so.16.0 00:04:36.130 SO libspdk_fsdev.so.2.0 00:04:36.130 SYMLINK libspdk_virtio.so 00:04:36.130 LIB libspdk_nvme.a 00:04:36.130 SYMLINK libspdk_fsdev.so 00:04:36.130 SYMLINK libspdk_accel.so 00:04:36.130 SO libspdk_nvme.so.15.0 00:04:36.388 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:36.388 CC lib/bdev/bdev.o 00:04:36.388 CC lib/bdev/bdev_rpc.o 00:04:36.388 CC lib/bdev/bdev_zone.o 00:04:36.388 CC lib/bdev/part.o 00:04:36.388 CC lib/bdev/scsi_nvme.o 00:04:36.388 LIB libspdk_event.a 00:04:36.388 SO libspdk_event.so.14.0 00:04:36.388 SYMLINK libspdk_event.so 00:04:36.388 SYMLINK libspdk_nvme.so 00:04:36.956 LIB libspdk_fuse_dispatcher.a 00:04:36.956 SO libspdk_fuse_dispatcher.so.1.0 00:04:36.956 SYMLINK libspdk_fuse_dispatcher.so 00:04:38.341 LIB libspdk_blob.a 00:04:38.342 SO libspdk_blob.so.11.0 00:04:38.342 SYMLINK libspdk_blob.so 00:04:38.342 LIB libspdk_bdev.a 00:04:38.342 CC lib/blobfs/tree.o 00:04:38.342 CC lib/blobfs/blobfs.o 00:04:38.342 SO libspdk_bdev.so.17.0 00:04:38.342 CC lib/lvol/lvol.o 00:04:38.603 SYMLINK libspdk_bdev.so 00:04:38.603 CC lib/ublk/ublk.o 00:04:38.603 CC lib/ublk/ublk_rpc.o 00:04:38.603 CC 
lib/ftl/ftl_core.o 00:04:38.603 CC lib/ftl/ftl_init.o 00:04:38.603 CC lib/nvmf/ctrlr.o 00:04:38.603 CC lib/ftl/ftl_layout.o 00:04:38.603 CC lib/scsi/dev.o 00:04:38.603 CC lib/nbd/nbd.o 00:04:38.864 CC lib/nbd/nbd_rpc.o 00:04:38.864 CC lib/ftl/ftl_debug.o 00:04:38.864 CC lib/scsi/lun.o 00:04:38.864 CC lib/ftl/ftl_io.o 00:04:38.864 CC lib/ftl/ftl_sb.o 00:04:38.864 CC lib/ftl/ftl_l2p.o 00:04:38.864 CC lib/scsi/port.o 00:04:38.864 LIB libspdk_nbd.a 00:04:39.126 SO libspdk_nbd.so.7.0 00:04:39.126 SYMLINK libspdk_nbd.so 00:04:39.126 CC lib/scsi/scsi.o 00:04:39.126 CC lib/scsi/scsi_bdev.o 00:04:39.126 CC lib/scsi/scsi_pr.o 00:04:39.126 CC lib/scsi/scsi_rpc.o 00:04:39.126 CC lib/ftl/ftl_l2p_flat.o 00:04:39.126 CC lib/ftl/ftl_nv_cache.o 00:04:39.126 CC lib/scsi/task.o 00:04:39.126 CC lib/nvmf/ctrlr_discovery.o 00:04:39.126 CC lib/ftl/ftl_band.o 00:04:39.387 LIB libspdk_blobfs.a 00:04:39.387 SO libspdk_blobfs.so.10.0 00:04:39.387 LIB libspdk_ublk.a 00:04:39.387 CC lib/ftl/ftl_band_ops.o 00:04:39.387 CC lib/ftl/ftl_writer.o 00:04:39.387 SYMLINK libspdk_blobfs.so 00:04:39.387 CC lib/ftl/ftl_rq.o 00:04:39.387 SO libspdk_ublk.so.3.0 00:04:39.387 LIB libspdk_lvol.a 00:04:39.387 SO libspdk_lvol.so.10.0 00:04:39.387 SYMLINK libspdk_ublk.so 00:04:39.387 CC lib/ftl/ftl_reloc.o 00:04:39.387 SYMLINK libspdk_lvol.so 00:04:39.387 CC lib/ftl/ftl_l2p_cache.o 00:04:39.387 LIB libspdk_scsi.a 00:04:39.387 CC lib/ftl/ftl_p2l.o 00:04:39.387 SO libspdk_scsi.so.9.0 00:04:39.648 CC lib/nvmf/ctrlr_bdev.o 00:04:39.648 SYMLINK libspdk_scsi.so 00:04:39.648 CC lib/ftl/ftl_p2l_log.o 00:04:39.648 CC lib/nvmf/subsystem.o 00:04:39.648 CC lib/ftl/mngt/ftl_mngt.o 00:04:39.648 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:39.909 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:39.909 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:39.909 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:39.909 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:39.909 CC lib/nvmf/nvmf.o 00:04:39.909 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:39.909 CC lib/iscsi/conn.o 00:04:39.909 CC lib/iscsi/init_grp.o 00:04:40.168 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:40.169 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:40.169 CC lib/iscsi/iscsi.o 00:04:40.169 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:40.169 CC lib/nvmf/nvmf_rpc.o 00:04:40.169 CC lib/vhost/vhost.o 00:04:40.169 CC lib/vhost/vhost_rpc.o 00:04:40.429 CC lib/vhost/vhost_scsi.o 00:04:40.429 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:40.429 CC lib/nvmf/transport.o 00:04:40.429 CC lib/vhost/vhost_blk.o 00:04:40.429 CC lib/vhost/rte_vhost_user.o 00:04:40.429 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:40.690 CC lib/iscsi/param.o 00:04:40.951 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:40.951 CC lib/ftl/utils/ftl_conf.o 00:04:40.951 CC lib/ftl/utils/ftl_md.o 00:04:40.951 CC lib/nvmf/tcp.o 00:04:40.951 CC lib/iscsi/portal_grp.o 00:04:40.951 CC lib/nvmf/stubs.o 00:04:40.951 CC lib/iscsi/tgt_node.o 00:04:40.951 CC lib/iscsi/iscsi_subsystem.o 00:04:41.212 CC lib/ftl/utils/ftl_mempool.o 00:04:41.212 CC lib/nvmf/mdns_server.o 00:04:41.212 CC lib/iscsi/iscsi_rpc.o 00:04:41.212 CC lib/iscsi/task.o 00:04:41.212 CC lib/ftl/utils/ftl_bitmap.o 00:04:41.473 CC lib/ftl/utils/ftl_property.o 00:04:41.473 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:41.473 LIB libspdk_vhost.a 00:04:41.473 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:41.473 CC lib/nvmf/rdma.o 00:04:41.473 CC lib/nvmf/auth.o 00:04:41.473 SO libspdk_vhost.so.8.0 00:04:41.473 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:41.473 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:41.473 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:41.473 LIB 
libspdk_iscsi.a 00:04:41.732 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:41.732 SYMLINK libspdk_vhost.so 00:04:41.732 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:41.732 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:41.732 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:41.732 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:41.732 SO libspdk_iscsi.so.8.0 00:04:41.732 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:41.732 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:41.732 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:41.732 CC lib/ftl/base/ftl_base_dev.o 00:04:41.732 CC lib/ftl/base/ftl_base_bdev.o 00:04:41.992 SYMLINK libspdk_iscsi.so 00:04:41.992 CC lib/ftl/ftl_trace.o 00:04:41.992 LIB libspdk_ftl.a 00:04:42.253 SO libspdk_ftl.so.9.0 00:04:42.514 SYMLINK libspdk_ftl.so 00:04:43.085 LIB libspdk_nvmf.a 00:04:43.345 SO libspdk_nvmf.so.20.0 00:04:43.345 SYMLINK libspdk_nvmf.so 00:04:43.603 CC module/env_dpdk/env_dpdk_rpc.o 00:04:43.862 CC module/sock/posix/posix.o 00:04:43.862 CC module/fsdev/aio/fsdev_aio.o 00:04:43.862 CC module/keyring/file/keyring.o 00:04:43.862 CC module/keyring/linux/keyring.o 00:04:43.862 CC module/accel/ioat/accel_ioat.o 00:04:43.862 CC module/accel/error/accel_error.o 00:04:43.862 CC module/blob/bdev/blob_bdev.o 00:04:43.862 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:43.862 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:43.862 LIB libspdk_env_dpdk_rpc.a 00:04:43.862 SO libspdk_env_dpdk_rpc.so.6.0 00:04:43.862 CC module/keyring/linux/keyring_rpc.o 00:04:43.862 SYMLINK libspdk_env_dpdk_rpc.so 00:04:43.862 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:43.862 CC module/keyring/file/keyring_rpc.o 00:04:43.862 LIB libspdk_scheduler_dpdk_governor.a 00:04:43.862 CC module/accel/ioat/accel_ioat_rpc.o 00:04:43.862 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:43.862 CC module/accel/error/accel_error_rpc.o 00:04:43.862 LIB libspdk_scheduler_dynamic.a 00:04:43.862 LIB libspdk_blob_bdev.a 00:04:44.120 SO libspdk_scheduler_dynamic.so.4.0 00:04:44.120 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:44.120 SO libspdk_blob_bdev.so.11.0 00:04:44.120 LIB libspdk_keyring_file.a 00:04:44.120 LIB libspdk_keyring_linux.a 00:04:44.120 CC module/fsdev/aio/linux_aio_mgr.o 00:04:44.120 SO libspdk_keyring_file.so.2.0 00:04:44.120 LIB libspdk_accel_ioat.a 00:04:44.120 SO libspdk_keyring_linux.so.1.0 00:04:44.120 SYMLINK libspdk_scheduler_dynamic.so 00:04:44.120 SYMLINK libspdk_blob_bdev.so 00:04:44.120 SO libspdk_accel_ioat.so.6.0 00:04:44.120 LIB libspdk_accel_error.a 00:04:44.120 SYMLINK libspdk_keyring_file.so 00:04:44.120 SYMLINK libspdk_keyring_linux.so 00:04:44.120 SO libspdk_accel_error.so.2.0 00:04:44.120 SYMLINK libspdk_accel_ioat.so 00:04:44.120 SYMLINK libspdk_accel_error.so 00:04:44.120 CC module/scheduler/gscheduler/gscheduler.o 00:04:44.120 CC module/accel/dsa/accel_dsa.o 00:04:44.120 CC module/accel/iaa/accel_iaa.o 00:04:44.378 LIB libspdk_fsdev_aio.a 00:04:44.378 CC module/bdev/gpt/gpt.o 00:04:44.379 CC module/bdev/delay/vbdev_delay.o 00:04:44.379 CC module/bdev/error/vbdev_error.o 00:04:44.379 CC module/blobfs/bdev/blobfs_bdev.o 00:04:44.379 LIB libspdk_sock_posix.a 00:04:44.379 LIB libspdk_scheduler_gscheduler.a 00:04:44.379 SO libspdk_fsdev_aio.so.1.0 00:04:44.379 CC module/bdev/lvol/vbdev_lvol.o 00:04:44.379 SO libspdk_scheduler_gscheduler.so.4.0 00:04:44.379 SO libspdk_sock_posix.so.6.0 00:04:44.379 CC module/accel/iaa/accel_iaa_rpc.o 00:04:44.379 SYMLINK libspdk_fsdev_aio.so 00:04:44.379 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:44.379 SYMLINK libspdk_scheduler_gscheduler.so 00:04:44.379 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:04:44.379 SYMLINK libspdk_sock_posix.so 00:04:44.379 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:44.379 LIB libspdk_accel_iaa.a 00:04:44.379 CC module/bdev/gpt/vbdev_gpt.o 00:04:44.379 CC module/bdev/error/vbdev_error_rpc.o 00:04:44.379 SO libspdk_accel_iaa.so.3.0 00:04:44.636 CC module/accel/dsa/accel_dsa_rpc.o 00:04:44.636 SYMLINK libspdk_accel_iaa.so 00:04:44.636 LIB libspdk_blobfs_bdev.a 00:04:44.636 SO libspdk_blobfs_bdev.so.6.0 00:04:44.636 LIB libspdk_bdev_error.a 00:04:44.636 LIB libspdk_accel_dsa.a 00:04:44.636 SO libspdk_bdev_error.so.6.0 00:04:44.636 SYMLINK libspdk_blobfs_bdev.so 00:04:44.636 CC module/bdev/malloc/bdev_malloc.o 00:04:44.636 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:44.636 LIB libspdk_bdev_delay.a 00:04:44.636 SO libspdk_accel_dsa.so.5.0 00:04:44.636 CC module/bdev/nvme/bdev_nvme.o 00:04:44.636 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:44.636 CC module/bdev/null/bdev_null.o 00:04:44.636 SYMLINK libspdk_bdev_error.so 00:04:44.636 CC module/bdev/nvme/nvme_rpc.o 00:04:44.636 SO libspdk_bdev_delay.so.6.0 00:04:44.636 SYMLINK libspdk_accel_dsa.so 00:04:44.636 LIB libspdk_bdev_gpt.a 00:04:44.636 SYMLINK libspdk_bdev_delay.so 00:04:44.636 CC module/bdev/null/bdev_null_rpc.o 00:04:44.636 SO libspdk_bdev_gpt.so.6.0 00:04:44.894 CC module/bdev/nvme/bdev_mdns_client.o 00:04:44.894 SYMLINK libspdk_bdev_gpt.so 00:04:44.894 LIB libspdk_bdev_lvol.a 00:04:44.894 CC module/bdev/passthru/vbdev_passthru.o 00:04:44.894 SO libspdk_bdev_lvol.so.6.0 00:04:44.894 SYMLINK libspdk_bdev_lvol.so 00:04:44.894 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:44.894 CC module/bdev/nvme/vbdev_opal.o 00:04:44.894 LIB libspdk_bdev_null.a 00:04:44.894 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:44.894 CC module/bdev/raid/bdev_raid.o 00:04:44.894 SO libspdk_bdev_null.so.6.0 00:04:44.894 CC module/bdev/split/vbdev_split.o 00:04:44.894 LIB libspdk_bdev_malloc.a 00:04:45.154 SYMLINK libspdk_bdev_null.so 00:04:45.154 SO libspdk_bdev_malloc.so.6.0 00:04:45.154 CC module/bdev/split/vbdev_split_rpc.o 00:04:45.154 CC module/bdev/raid/bdev_raid_rpc.o 00:04:45.154 SYMLINK libspdk_bdev_malloc.so 00:04:45.154 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:45.154 LIB libspdk_bdev_passthru.a 00:04:45.154 SO libspdk_bdev_passthru.so.6.0 00:04:45.154 LIB libspdk_bdev_split.a 00:04:45.154 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:45.154 SO libspdk_bdev_split.so.6.0 00:04:45.154 SYMLINK libspdk_bdev_passthru.so 00:04:45.154 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:45.429 SYMLINK libspdk_bdev_split.so 00:04:45.429 CC module/bdev/raid/bdev_raid_sb.o 00:04:45.429 CC module/bdev/raid/raid0.o 00:04:45.429 CC module/bdev/xnvme/bdev_xnvme.o 00:04:45.429 CC module/bdev/ftl/bdev_ftl.o 00:04:45.429 CC module/bdev/aio/bdev_aio.o 00:04:45.429 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:45.429 CC module/bdev/iscsi/bdev_iscsi.o 00:04:45.429 CC module/bdev/raid/raid1.o 00:04:45.429 LIB libspdk_bdev_zone_block.a 00:04:45.429 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:45.688 SO libspdk_bdev_zone_block.so.6.0 00:04:45.688 CC module/bdev/raid/concat.o 00:04:45.688 LIB libspdk_bdev_xnvme.a 00:04:45.688 SO libspdk_bdev_xnvme.so.3.0 00:04:45.688 SYMLINK libspdk_bdev_zone_block.so 00:04:45.688 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:45.688 SYMLINK libspdk_bdev_xnvme.so 00:04:45.688 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:45.688 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:45.688 CC module/bdev/aio/bdev_aio_rpc.o 00:04:45.688 CC 
module/bdev/virtio/bdev_virtio_rpc.o 00:04:45.688 LIB libspdk_bdev_ftl.a 00:04:45.688 SO libspdk_bdev_ftl.so.6.0 00:04:45.688 LIB libspdk_bdev_iscsi.a 00:04:45.946 LIB libspdk_bdev_aio.a 00:04:45.946 SYMLINK libspdk_bdev_ftl.so 00:04:45.946 SO libspdk_bdev_iscsi.so.6.0 00:04:45.946 SO libspdk_bdev_aio.so.6.0 00:04:45.946 SYMLINK libspdk_bdev_iscsi.so 00:04:45.946 SYMLINK libspdk_bdev_aio.so 00:04:45.946 LIB libspdk_bdev_raid.a 00:04:45.946 SO libspdk_bdev_raid.so.6.0 00:04:46.204 SYMLINK libspdk_bdev_raid.so 00:04:46.204 LIB libspdk_bdev_virtio.a 00:04:46.204 SO libspdk_bdev_virtio.so.6.0 00:04:46.204 SYMLINK libspdk_bdev_virtio.so 00:04:46.775 LIB libspdk_bdev_nvme.a 00:04:46.775 SO libspdk_bdev_nvme.so.7.1 00:04:47.035 SYMLINK libspdk_bdev_nvme.so 00:04:47.293 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:47.293 CC module/event/subsystems/scheduler/scheduler.o 00:04:47.293 CC module/event/subsystems/sock/sock.o 00:04:47.293 CC module/event/subsystems/fsdev/fsdev.o 00:04:47.293 CC module/event/subsystems/iobuf/iobuf.o 00:04:47.293 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:47.293 CC module/event/subsystems/keyring/keyring.o 00:04:47.293 CC module/event/subsystems/vmd/vmd.o 00:04:47.293 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:47.553 LIB libspdk_event_sock.a 00:04:47.553 LIB libspdk_event_keyring.a 00:04:47.553 LIB libspdk_event_scheduler.a 00:04:47.553 SO libspdk_event_sock.so.5.0 00:04:47.553 LIB libspdk_event_fsdev.a 00:04:47.553 LIB libspdk_event_vhost_blk.a 00:04:47.553 SO libspdk_event_keyring.so.1.0 00:04:47.553 SO libspdk_event_fsdev.so.1.0 00:04:47.553 LIB libspdk_event_iobuf.a 00:04:47.553 SO libspdk_event_vhost_blk.so.3.0 00:04:47.553 SO libspdk_event_scheduler.so.4.0 00:04:47.553 LIB libspdk_event_vmd.a 00:04:47.553 SYMLINK libspdk_event_sock.so 00:04:47.553 SO libspdk_event_iobuf.so.3.0 00:04:47.553 SYMLINK libspdk_event_fsdev.so 00:04:47.553 SO libspdk_event_vmd.so.6.0 00:04:47.553 SYMLINK libspdk_event_keyring.so 00:04:47.553 SYMLINK libspdk_event_scheduler.so 00:04:47.553 SYMLINK libspdk_event_vhost_blk.so 00:04:47.553 SYMLINK libspdk_event_iobuf.so 00:04:47.553 SYMLINK libspdk_event_vmd.so 00:04:47.813 CC module/event/subsystems/accel/accel.o 00:04:47.813 LIB libspdk_event_accel.a 00:04:48.073 SO libspdk_event_accel.so.6.0 00:04:48.073 SYMLINK libspdk_event_accel.so 00:04:48.334 CC module/event/subsystems/bdev/bdev.o 00:04:48.334 LIB libspdk_event_bdev.a 00:04:48.334 SO libspdk_event_bdev.so.6.0 00:04:48.334 SYMLINK libspdk_event_bdev.so 00:04:48.595 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:48.595 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:48.595 CC module/event/subsystems/scsi/scsi.o 00:04:48.595 CC module/event/subsystems/ublk/ublk.o 00:04:48.595 CC module/event/subsystems/nbd/nbd.o 00:04:48.856 LIB libspdk_event_nbd.a 00:04:48.856 LIB libspdk_event_ublk.a 00:04:48.856 LIB libspdk_event_scsi.a 00:04:48.856 SO libspdk_event_ublk.so.3.0 00:04:48.856 SO libspdk_event_nbd.so.6.0 00:04:48.856 SO libspdk_event_scsi.so.6.0 00:04:48.856 SYMLINK libspdk_event_nbd.so 00:04:48.856 SYMLINK libspdk_event_ublk.so 00:04:48.856 SYMLINK libspdk_event_scsi.so 00:04:48.856 LIB libspdk_event_nvmf.a 00:04:48.856 SO libspdk_event_nvmf.so.6.0 00:04:48.856 SYMLINK libspdk_event_nvmf.so 00:04:49.117 CC module/event/subsystems/iscsi/iscsi.o 00:04:49.117 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:49.117 LIB libspdk_event_vhost_scsi.a 00:04:49.117 SO libspdk_event_vhost_scsi.so.3.0 00:04:49.117 LIB libspdk_event_iscsi.a 00:04:49.117 SO 
libspdk_event_iscsi.so.6.0 00:04:49.378 SYMLINK libspdk_event_vhost_scsi.so 00:04:49.378 SYMLINK libspdk_event_iscsi.so 00:04:49.378 SO libspdk.so.6.0 00:04:49.378 SYMLINK libspdk.so 00:04:49.638 CXX app/trace/trace.o 00:04:49.638 CC app/spdk_lspci/spdk_lspci.o 00:04:49.638 CC app/trace_record/trace_record.o 00:04:49.638 CC app/nvmf_tgt/nvmf_main.o 00:04:49.638 CC app/iscsi_tgt/iscsi_tgt.o 00:04:49.638 CC app/spdk_tgt/spdk_tgt.o 00:04:49.638 CC test/thread/poller_perf/poller_perf.o 00:04:49.638 CC examples/util/zipf/zipf.o 00:04:49.638 CC test/app/bdev_svc/bdev_svc.o 00:04:49.638 CC test/dma/test_dma/test_dma.o 00:04:49.638 LINK spdk_lspci 00:04:49.900 LINK nvmf_tgt 00:04:49.900 LINK poller_perf 00:04:49.900 LINK iscsi_tgt 00:04:49.900 LINK spdk_trace_record 00:04:49.900 LINK zipf 00:04:49.900 LINK spdk_tgt 00:04:49.900 LINK bdev_svc 00:04:49.900 LINK spdk_trace 00:04:49.900 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:49.900 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:49.900 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:49.900 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:50.162 TEST_HEADER include/spdk/accel.h 00:04:50.162 TEST_HEADER include/spdk/accel_module.h 00:04:50.162 TEST_HEADER include/spdk/assert.h 00:04:50.162 TEST_HEADER include/spdk/barrier.h 00:04:50.162 CC test/app/histogram_perf/histogram_perf.o 00:04:50.162 TEST_HEADER include/spdk/base64.h 00:04:50.162 TEST_HEADER include/spdk/bdev.h 00:04:50.162 TEST_HEADER include/spdk/bdev_module.h 00:04:50.162 TEST_HEADER include/spdk/bdev_zone.h 00:04:50.162 TEST_HEADER include/spdk/bit_array.h 00:04:50.162 TEST_HEADER include/spdk/bit_pool.h 00:04:50.162 TEST_HEADER include/spdk/blob_bdev.h 00:04:50.162 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:50.162 TEST_HEADER include/spdk/blobfs.h 00:04:50.162 TEST_HEADER include/spdk/blob.h 00:04:50.162 TEST_HEADER include/spdk/conf.h 00:04:50.162 TEST_HEADER include/spdk/config.h 00:04:50.162 TEST_HEADER include/spdk/cpuset.h 00:04:50.162 TEST_HEADER include/spdk/crc16.h 00:04:50.162 TEST_HEADER include/spdk/crc32.h 00:04:50.162 TEST_HEADER include/spdk/crc64.h 00:04:50.162 TEST_HEADER include/spdk/dif.h 00:04:50.162 TEST_HEADER include/spdk/dma.h 00:04:50.162 TEST_HEADER include/spdk/endian.h 00:04:50.162 TEST_HEADER include/spdk/env_dpdk.h 00:04:50.162 TEST_HEADER include/spdk/env.h 00:04:50.162 TEST_HEADER include/spdk/event.h 00:04:50.162 TEST_HEADER include/spdk/fd_group.h 00:04:50.162 CC test/app/jsoncat/jsoncat.o 00:04:50.162 TEST_HEADER include/spdk/fd.h 00:04:50.162 TEST_HEADER include/spdk/file.h 00:04:50.162 TEST_HEADER include/spdk/fsdev.h 00:04:50.162 TEST_HEADER include/spdk/fsdev_module.h 00:04:50.162 TEST_HEADER include/spdk/ftl.h 00:04:50.162 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:50.162 TEST_HEADER include/spdk/gpt_spec.h 00:04:50.162 TEST_HEADER include/spdk/hexlify.h 00:04:50.162 TEST_HEADER include/spdk/histogram_data.h 00:04:50.162 CC examples/ioat/perf/perf.o 00:04:50.162 TEST_HEADER include/spdk/idxd.h 00:04:50.162 TEST_HEADER include/spdk/idxd_spec.h 00:04:50.162 TEST_HEADER include/spdk/init.h 00:04:50.162 TEST_HEADER include/spdk/ioat.h 00:04:50.162 TEST_HEADER include/spdk/ioat_spec.h 00:04:50.162 TEST_HEADER include/spdk/iscsi_spec.h 00:04:50.162 TEST_HEADER include/spdk/json.h 00:04:50.162 TEST_HEADER include/spdk/jsonrpc.h 00:04:50.162 TEST_HEADER include/spdk/keyring.h 00:04:50.162 TEST_HEADER include/spdk/keyring_module.h 00:04:50.162 TEST_HEADER include/spdk/likely.h 00:04:50.162 TEST_HEADER include/spdk/log.h 00:04:50.162 TEST_HEADER 
include/spdk/lvol.h 00:04:50.162 TEST_HEADER include/spdk/md5.h 00:04:50.162 TEST_HEADER include/spdk/memory.h 00:04:50.162 TEST_HEADER include/spdk/mmio.h 00:04:50.162 TEST_HEADER include/spdk/nbd.h 00:04:50.162 TEST_HEADER include/spdk/net.h 00:04:50.162 TEST_HEADER include/spdk/notify.h 00:04:50.162 TEST_HEADER include/spdk/nvme.h 00:04:50.162 TEST_HEADER include/spdk/nvme_intel.h 00:04:50.162 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:50.162 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:50.162 TEST_HEADER include/spdk/nvme_spec.h 00:04:50.162 TEST_HEADER include/spdk/nvme_zns.h 00:04:50.162 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:50.162 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:50.162 TEST_HEADER include/spdk/nvmf.h 00:04:50.162 TEST_HEADER include/spdk/nvmf_spec.h 00:04:50.162 TEST_HEADER include/spdk/nvmf_transport.h 00:04:50.162 TEST_HEADER include/spdk/opal.h 00:04:50.162 TEST_HEADER include/spdk/opal_spec.h 00:04:50.162 TEST_HEADER include/spdk/pci_ids.h 00:04:50.162 TEST_HEADER include/spdk/pipe.h 00:04:50.162 TEST_HEADER include/spdk/queue.h 00:04:50.162 CC app/spdk_nvme_perf/perf.o 00:04:50.162 TEST_HEADER include/spdk/reduce.h 00:04:50.162 TEST_HEADER include/spdk/rpc.h 00:04:50.162 TEST_HEADER include/spdk/scheduler.h 00:04:50.162 TEST_HEADER include/spdk/scsi.h 00:04:50.162 TEST_HEADER include/spdk/scsi_spec.h 00:04:50.162 TEST_HEADER include/spdk/sock.h 00:04:50.162 TEST_HEADER include/spdk/stdinc.h 00:04:50.162 TEST_HEADER include/spdk/string.h 00:04:50.162 TEST_HEADER include/spdk/thread.h 00:04:50.162 TEST_HEADER include/spdk/trace.h 00:04:50.162 TEST_HEADER include/spdk/trace_parser.h 00:04:50.162 TEST_HEADER include/spdk/tree.h 00:04:50.162 LINK histogram_perf 00:04:50.162 TEST_HEADER include/spdk/ublk.h 00:04:50.162 TEST_HEADER include/spdk/util.h 00:04:50.162 TEST_HEADER include/spdk/uuid.h 00:04:50.162 TEST_HEADER include/spdk/version.h 00:04:50.162 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:50.162 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:50.162 TEST_HEADER include/spdk/vhost.h 00:04:50.162 TEST_HEADER include/spdk/vmd.h 00:04:50.162 LINK jsoncat 00:04:50.162 LINK test_dma 00:04:50.162 TEST_HEADER include/spdk/xor.h 00:04:50.162 TEST_HEADER include/spdk/zipf.h 00:04:50.162 CXX test/cpp_headers/accel.o 00:04:50.421 LINK ioat_perf 00:04:50.421 CC examples/vmd/lsvmd/lsvmd.o 00:04:50.421 LINK vhost_fuzz 00:04:50.421 CXX test/cpp_headers/accel_module.o 00:04:50.421 LINK nvme_fuzz 00:04:50.421 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:50.421 CC examples/idxd/perf/perf.o 00:04:50.421 LINK lsvmd 00:04:50.421 CC examples/ioat/verify/verify.o 00:04:50.421 CXX test/cpp_headers/assert.o 00:04:50.680 CC examples/thread/thread/thread_ex.o 00:04:50.680 CC examples/vmd/led/led.o 00:04:50.680 LINK interrupt_tgt 00:04:50.680 CC examples/sock/hello_world/hello_sock.o 00:04:50.680 CC test/app/stub/stub.o 00:04:50.680 CXX test/cpp_headers/barrier.o 00:04:50.680 LINK led 00:04:50.680 LINK verify 00:04:50.680 CXX test/cpp_headers/base64.o 00:04:50.680 LINK thread 00:04:50.938 CXX test/cpp_headers/bdev.o 00:04:50.938 LINK idxd_perf 00:04:50.938 CXX test/cpp_headers/bdev_module.o 00:04:50.938 LINK stub 00:04:50.938 LINK hello_sock 00:04:50.938 CXX test/cpp_headers/bdev_zone.o 00:04:50.938 CXX test/cpp_headers/bit_array.o 00:04:50.938 CXX test/cpp_headers/bit_pool.o 00:04:50.938 LINK spdk_nvme_perf 00:04:50.938 CC test/env/mem_callbacks/mem_callbacks.o 00:04:50.938 CXX test/cpp_headers/blob_bdev.o 00:04:51.198 CC examples/accel/perf/accel_perf.o 00:04:51.198 CC 
examples/blob/hello_world/hello_blob.o 00:04:51.198 CC test/env/vtophys/vtophys.o 00:04:51.198 LINK mem_callbacks 00:04:51.198 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:51.198 CC examples/blob/cli/blobcli.o 00:04:51.198 CC examples/nvme/hello_world/hello_world.o 00:04:51.198 CXX test/cpp_headers/blobfs_bdev.o 00:04:51.198 CC app/spdk_nvme_identify/identify.o 00:04:51.198 LINK vtophys 00:04:51.456 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:51.456 CXX test/cpp_headers/blobfs.o 00:04:51.456 LINK hello_blob 00:04:51.456 LINK hello_world 00:04:51.456 LINK hello_fsdev 00:04:51.456 CC examples/nvme/reconnect/reconnect.o 00:04:51.456 LINK env_dpdk_post_init 00:04:51.456 CXX test/cpp_headers/blob.o 00:04:51.715 LINK iscsi_fuzz 00:04:51.715 LINK accel_perf 00:04:51.715 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:51.715 CC test/env/memory/memory_ut.o 00:04:51.715 CXX test/cpp_headers/conf.o 00:04:51.715 LINK blobcli 00:04:51.715 CC examples/nvme/arbitration/arbitration.o 00:04:51.715 CC examples/nvme/hotplug/hotplug.o 00:04:51.715 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:51.715 CXX test/cpp_headers/config.o 00:04:51.715 CXX test/cpp_headers/cpuset.o 00:04:51.973 LINK reconnect 00:04:51.973 CC examples/nvme/abort/abort.o 00:04:51.973 LINK cmb_copy 00:04:51.973 CC test/env/pci/pci_ut.o 00:04:51.973 CXX test/cpp_headers/crc16.o 00:04:51.973 LINK hotplug 00:04:51.973 LINK arbitration 00:04:51.973 LINK spdk_nvme_identify 00:04:51.973 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:52.232 CXX test/cpp_headers/crc32.o 00:04:52.232 LINK nvme_manage 00:04:52.232 CC examples/bdev/hello_world/hello_bdev.o 00:04:52.232 CC examples/bdev/bdevperf/bdevperf.o 00:04:52.232 CC app/spdk_nvme_discover/discovery_aer.o 00:04:52.232 LINK abort 00:04:52.232 CXX test/cpp_headers/crc64.o 00:04:52.232 LINK pmr_persistence 00:04:52.232 CXX test/cpp_headers/dif.o 00:04:52.232 LINK pci_ut 00:04:52.232 CC test/event/event_perf/event_perf.o 00:04:52.490 LINK hello_bdev 00:04:52.490 CXX test/cpp_headers/dma.o 00:04:52.490 CXX test/cpp_headers/endian.o 00:04:52.490 CXX test/cpp_headers/env_dpdk.o 00:04:52.490 LINK memory_ut 00:04:52.490 CXX test/cpp_headers/env.o 00:04:52.490 LINK spdk_nvme_discover 00:04:52.490 LINK event_perf 00:04:52.490 CXX test/cpp_headers/event.o 00:04:52.490 CXX test/cpp_headers/fd_group.o 00:04:52.490 CXX test/cpp_headers/fd.o 00:04:52.490 CXX test/cpp_headers/file.o 00:04:52.490 CXX test/cpp_headers/fsdev.o 00:04:52.490 CXX test/cpp_headers/fsdev_module.o 00:04:52.490 CXX test/cpp_headers/ftl.o 00:04:52.749 CC app/spdk_top/spdk_top.o 00:04:52.749 CC test/event/reactor/reactor.o 00:04:52.749 CXX test/cpp_headers/fuse_dispatcher.o 00:04:52.749 CXX test/cpp_headers/gpt_spec.o 00:04:52.749 CC test/event/reactor_perf/reactor_perf.o 00:04:52.749 CXX test/cpp_headers/hexlify.o 00:04:52.749 CXX test/cpp_headers/histogram_data.o 00:04:52.749 CC test/event/app_repeat/app_repeat.o 00:04:52.749 LINK reactor 00:04:52.749 CXX test/cpp_headers/idxd.o 00:04:52.749 LINK reactor_perf 00:04:52.749 CXX test/cpp_headers/idxd_spec.o 00:04:52.749 CXX test/cpp_headers/init.o 00:04:53.008 CC test/event/scheduler/scheduler.o 00:04:53.008 LINK app_repeat 00:04:53.008 CC app/vhost/vhost.o 00:04:53.008 CXX test/cpp_headers/ioat.o 00:04:53.008 CC test/nvme/aer/aer.o 00:04:53.008 LINK bdevperf 00:04:53.008 CC test/nvme/reset/reset.o 00:04:53.008 CC test/rpc_client/rpc_client_test.o 00:04:53.008 CC test/nvme/sgl/sgl.o 00:04:53.008 LINK vhost 00:04:53.267 CXX test/cpp_headers/ioat_spec.o 00:04:53.267 
LINK scheduler 00:04:53.267 CC test/nvme/e2edp/nvme_dp.o 00:04:53.267 LINK rpc_client_test 00:04:53.267 CXX test/cpp_headers/iscsi_spec.o 00:04:53.267 LINK reset 00:04:53.267 LINK aer 00:04:53.267 LINK sgl 00:04:53.267 LINK spdk_top 00:04:53.526 CC test/nvme/overhead/overhead.o 00:04:53.527 CC test/nvme/err_injection/err_injection.o 00:04:53.527 CXX test/cpp_headers/json.o 00:04:53.527 CC examples/nvmf/nvmf/nvmf.o 00:04:53.527 CC test/nvme/startup/startup.o 00:04:53.527 CXX test/cpp_headers/jsonrpc.o 00:04:53.527 LINK nvme_dp 00:04:53.527 LINK err_injection 00:04:53.527 LINK startup 00:04:53.527 CXX test/cpp_headers/keyring.o 00:04:53.527 CC app/spdk_dd/spdk_dd.o 00:04:53.527 CXX test/cpp_headers/keyring_module.o 00:04:53.786 CC test/accel/dif/dif.o 00:04:53.786 LINK overhead 00:04:53.786 CC test/blobfs/mkfs/mkfs.o 00:04:53.786 CC app/fio/nvme/fio_plugin.o 00:04:53.786 CXX test/cpp_headers/likely.o 00:04:53.786 CXX test/cpp_headers/log.o 00:04:53.786 LINK nvmf 00:04:53.786 CC test/nvme/reserve/reserve.o 00:04:53.786 CXX test/cpp_headers/lvol.o 00:04:53.786 CXX test/cpp_headers/md5.o 00:04:53.786 LINK mkfs 00:04:53.786 CC test/nvme/simple_copy/simple_copy.o 00:04:53.786 CXX test/cpp_headers/memory.o 00:04:54.044 LINK spdk_dd 00:04:54.044 CC test/lvol/esnap/esnap.o 00:04:54.044 CXX test/cpp_headers/mmio.o 00:04:54.044 CXX test/cpp_headers/nbd.o 00:04:54.044 LINK reserve 00:04:54.044 CC test/nvme/boot_partition/boot_partition.o 00:04:54.044 CC test/nvme/connect_stress/connect_stress.o 00:04:54.044 LINK simple_copy 00:04:54.044 CC test/nvme/compliance/nvme_compliance.o 00:04:54.044 CXX test/cpp_headers/net.o 00:04:54.301 CXX test/cpp_headers/notify.o 00:04:54.301 CC test/nvme/fused_ordering/fused_ordering.o 00:04:54.301 LINK dif 00:04:54.301 LINK boot_partition 00:04:54.301 LINK spdk_nvme 00:04:54.301 LINK connect_stress 00:04:54.301 CXX test/cpp_headers/nvme.o 00:04:54.301 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:54.301 CXX test/cpp_headers/nvme_intel.o 00:04:54.301 LINK fused_ordering 00:04:54.301 CXX test/cpp_headers/nvme_ocssd.o 00:04:54.301 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:54.302 CC app/fio/bdev/fio_plugin.o 00:04:54.559 LINK nvme_compliance 00:04:54.559 CXX test/cpp_headers/nvme_spec.o 00:04:54.559 LINK doorbell_aers 00:04:54.559 CC test/nvme/fdp/fdp.o 00:04:54.559 CXX test/cpp_headers/nvme_zns.o 00:04:54.559 CXX test/cpp_headers/nvmf_cmd.o 00:04:54.559 CC test/nvme/cuse/cuse.o 00:04:54.559 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:54.559 CXX test/cpp_headers/nvmf.o 00:04:54.559 CXX test/cpp_headers/nvmf_spec.o 00:04:54.816 CC test/bdev/bdevio/bdevio.o 00:04:54.816 CXX test/cpp_headers/nvmf_transport.o 00:04:54.816 CXX test/cpp_headers/opal.o 00:04:54.816 CXX test/cpp_headers/opal_spec.o 00:04:54.816 CXX test/cpp_headers/pci_ids.o 00:04:54.816 CXX test/cpp_headers/pipe.o 00:04:54.816 LINK fdp 00:04:54.816 CXX test/cpp_headers/queue.o 00:04:54.816 CXX test/cpp_headers/reduce.o 00:04:54.816 CXX test/cpp_headers/rpc.o 00:04:54.816 CXX test/cpp_headers/scheduler.o 00:04:54.816 CXX test/cpp_headers/scsi.o 00:04:54.816 CXX test/cpp_headers/scsi_spec.o 00:04:54.816 LINK spdk_bdev 00:04:55.074 CXX test/cpp_headers/sock.o 00:04:55.074 CXX test/cpp_headers/stdinc.o 00:04:55.074 CXX test/cpp_headers/string.o 00:04:55.074 CXX test/cpp_headers/thread.o 00:04:55.074 CXX test/cpp_headers/trace.o 00:04:55.074 CXX test/cpp_headers/trace_parser.o 00:04:55.074 CXX test/cpp_headers/tree.o 00:04:55.074 LINK bdevio 00:04:55.074 CXX test/cpp_headers/ublk.o 00:04:55.074 CXX 
test/cpp_headers/util.o 00:04:55.074 CXX test/cpp_headers/uuid.o 00:04:55.074 CXX test/cpp_headers/version.o 00:04:55.074 CXX test/cpp_headers/vfio_user_pci.o 00:04:55.074 CXX test/cpp_headers/vfio_user_spec.o 00:04:55.074 CXX test/cpp_headers/vhost.o 00:04:55.074 CXX test/cpp_headers/vmd.o 00:04:55.333 CXX test/cpp_headers/xor.o 00:04:55.333 CXX test/cpp_headers/zipf.o 00:04:55.899 LINK cuse 00:04:59.183 LINK esnap 00:04:59.183 00:04:59.183 real 1m3.424s 00:04:59.183 user 5m14.035s 00:04:59.183 sys 0m53.915s 00:04:59.183 20:47:16 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:59.183 20:47:16 make -- common/autotest_common.sh@10 -- $ set +x 00:04:59.183 ************************************ 00:04:59.183 END TEST make 00:04:59.183 ************************************ 00:04:59.183 20:47:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:59.183 20:47:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:59.183 20:47:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:59.183 20:47:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.183 20:47:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:59.183 20:47:16 -- pm/common@44 -- $ pid=5806 00:04:59.183 20:47:16 -- pm/common@50 -- $ kill -TERM 5806 00:04:59.183 20:47:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.183 20:47:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:59.183 20:47:16 -- pm/common@44 -- $ pid=5807 00:04:59.183 20:47:16 -- pm/common@50 -- $ kill -TERM 5807 00:04:59.183 20:47:16 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:59.183 20:47:16 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:59.183 20:47:16 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.183 20:47:16 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.183 20:47:16 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.183 20:47:17 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.183 20:47:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.183 20:47:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.183 20:47:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.183 20:47:17 -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.183 20:47:17 -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.183 20:47:17 -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.183 20:47:17 -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.183 20:47:17 -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.183 20:47:17 -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.183 20:47:17 -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.183 20:47:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.183 20:47:17 -- scripts/common.sh@344 -- # case "$op" in 00:04:59.183 20:47:17 -- scripts/common.sh@345 -- # : 1 00:04:59.183 20:47:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.183 20:47:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.183 20:47:17 -- scripts/common.sh@365 -- # decimal 1 00:04:59.183 20:47:17 -- scripts/common.sh@353 -- # local d=1 00:04:59.183 20:47:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.183 20:47:17 -- scripts/common.sh@355 -- # echo 1 00:04:59.183 20:47:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.183 20:47:17 -- scripts/common.sh@366 -- # decimal 2 00:04:59.183 20:47:17 -- scripts/common.sh@353 -- # local d=2 00:04:59.183 20:47:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.183 20:47:17 -- scripts/common.sh@355 -- # echo 2 00:04:59.183 20:47:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.183 20:47:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.183 20:47:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.183 20:47:17 -- scripts/common.sh@368 -- # return 0 00:04:59.183 20:47:17 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.183 20:47:17 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.183 --rc genhtml_branch_coverage=1 00:04:59.183 --rc genhtml_function_coverage=1 00:04:59.183 --rc genhtml_legend=1 00:04:59.183 --rc geninfo_all_blocks=1 00:04:59.183 --rc geninfo_unexecuted_blocks=1 00:04:59.183 00:04:59.183 ' 00:04:59.183 20:47:17 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.183 --rc genhtml_branch_coverage=1 00:04:59.183 --rc genhtml_function_coverage=1 00:04:59.183 --rc genhtml_legend=1 00:04:59.183 --rc geninfo_all_blocks=1 00:04:59.183 --rc geninfo_unexecuted_blocks=1 00:04:59.183 00:04:59.183 ' 00:04:59.183 20:47:17 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.183 --rc genhtml_branch_coverage=1 00:04:59.183 --rc genhtml_function_coverage=1 00:04:59.183 --rc genhtml_legend=1 00:04:59.183 --rc geninfo_all_blocks=1 00:04:59.183 --rc geninfo_unexecuted_blocks=1 00:04:59.183 00:04:59.183 ' 00:04:59.183 20:47:17 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.183 --rc genhtml_branch_coverage=1 00:04:59.183 --rc genhtml_function_coverage=1 00:04:59.183 --rc genhtml_legend=1 00:04:59.183 --rc geninfo_all_blocks=1 00:04:59.183 --rc geninfo_unexecuted_blocks=1 00:04:59.183 00:04:59.183 ' 00:04:59.183 20:47:17 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.183 20:47:17 -- nvmf/common.sh@7 -- # uname -s 00:04:59.183 20:47:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.183 20:47:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.183 20:47:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.183 20:47:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.183 20:47:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.183 20:47:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.183 20:47:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.183 20:47:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.183 20:47:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.183 20:47:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.183 20:47:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:873dcbcf-2835-4028-b4fb-8081b83de7a7 00:04:59.183 
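The xtrace above steps through the lcov version gate in scripts/common.sh: lt 1.15 2 splits each version string on the characters .-:, normalizes every field with decimal, and compares the two arrays component by component. A condensed, self-contained sketch of that logic, reconstructed from the trace (the full script supports more comparison operators than the strict < and > shown here):

decimal() {
    local d=$1
    # Non-numeric fields (e.g. "rc1") compare as 0 in this condensed sketch.
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0
}

cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v
    # Walk the longer of the two component lists; missing fields default to 0.
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        (( ver1[v] > ver2[v] )) && { [[ $op == '>' ]]; return; }
        (( ver1[v] < ver2[v] )) && { [[ $op == '<' ]]; return; }
    done
    return 1   # versions are equal: neither strictly < nor >
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo 'lcov is older than 2.x, keep the 1.x option syntax'

Here the gate passes (1 < 2 at the first component), so autotest keeps the lcov 1.x flag syntax used in the coverage commands later in the run.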
20:47:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=873dcbcf-2835-4028-b4fb-8081b83de7a7 00:04:59.183 20:47:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.183 20:47:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.183 20:47:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.183 20:47:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.183 20:47:17 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.183 20:47:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:59.183 20:47:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.183 20:47:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.183 20:47:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.183 20:47:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.183 20:47:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.183 20:47:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.183 20:47:17 -- paths/export.sh@5 -- # export PATH 00:04:59.183 20:47:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.183 20:47:17 -- nvmf/common.sh@51 -- # : 0 00:04:59.183 20:47:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:59.183 20:47:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:59.183 20:47:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.183 20:47:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.183 20:47:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.183 20:47:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:59.183 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:59.183 20:47:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:59.183 20:47:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:59.183 20:47:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:59.183 20:47:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:59.183 20:47:17 -- spdk/autotest.sh@32 -- # uname -s 00:04:59.183 20:47:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:59.183 20:47:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:59.183 20:47:17 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:59.183 20:47:17 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:59.183 20:47:17 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:59.183 20:47:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:59.183 20:47:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:59.183 20:47:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:59.183 20:47:17 -- spdk/autotest.sh@48 -- # udevadm_pid=66237 00:04:59.183 20:47:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:59.183 20:47:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:59.183 20:47:17 -- pm/common@17 -- # local monitor 00:04:59.183 20:47:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.183 20:47:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.183 20:47:17 -- pm/common@25 -- # sleep 1 00:04:59.183 20:47:17 -- pm/common@21 -- # date +%s 00:04:59.183 20:47:17 -- pm/common@21 -- # date +%s 00:04:59.183 20:47:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732135637 00:04:59.184 20:47:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732135637 00:04:59.184 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732135637_collect-vmstat.pm.log 00:04:59.184 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732135637_collect-cpu-load.pm.log 00:05:00.118 20:47:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:00.118 20:47:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:00.118 20:47:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.118 20:47:18 -- common/autotest_common.sh@10 -- # set +x 00:05:00.118 20:47:18 -- spdk/autotest.sh@59 -- # create_test_list 00:05:00.118 20:47:18 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:00.118 20:47:18 -- common/autotest_common.sh@10 -- # set +x 00:05:00.118 20:47:18 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:00.118 20:47:18 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:00.118 20:47:18 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:00.118 20:47:18 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:00.118 20:47:18 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:00.118 20:47:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:00.118 20:47:18 -- common/autotest_common.sh@1457 -- # uname 00:05:00.118 20:47:18 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:00.118 20:47:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:00.119 20:47:18 -- common/autotest_common.sh@1477 -- # uname 00:05:00.119 20:47:18 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:00.119 20:47:18 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:00.119 20:47:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:00.377 lcov: LCOV version 1.15 00:05:00.377 20:47:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:15.259 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:15.259 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:30.146 20:47:46 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:30.146 20:47:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.146 20:47:46 -- common/autotest_common.sh@10 -- # set +x 00:05:30.146 20:47:46 -- spdk/autotest.sh@78 -- # rm -f 00:05:30.146 20:47:46 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:30.146 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:30.146 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:30.146 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:30.146 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:30.146 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:30.146 20:47:47 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:30.146 20:47:47 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:30.146 20:47:47 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:30.146 20:47:47 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:30.146 20:47:47 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:30.146 20:47:47 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:30.146 20:47:47 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:30.146 20:47:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:30.146 20:47:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:30.146 20:47:47 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:30.146 20:47:47 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:30.146 20:47:47 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:30.146 20:47:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:30.146 20:47:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:30.146 20:47:47 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:30.146 20:47:47 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:30.146 20:47:47 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:30.146 20:47:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:30.146 20:47:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:30.146 20:47:47 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:30.146 20:47:47 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:05:30.146 20:47:47 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:30.146 20:47:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:30.146 20:47:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:30.146 20:47:47 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:30.146 20:47:47 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:05:30.146 20:47:47 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:30.146 20:47:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:30.146 20:47:47 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:30.146 20:47:47 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:30.146 20:47:47 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:05:30.146 20:47:47 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:30.146 20:47:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:30.146 20:47:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:30.146 20:47:47 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:30.146 20:47:47 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:30.146 20:47:47 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:30.146 20:47:47 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:30.146 20:47:47 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:30.146 20:47:47 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:30.146 20:47:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.146 20:47:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.146 20:47:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:30.146 20:47:47 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:30.146 20:47:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:30.146 No valid GPT data, bailing 00:05:30.146 20:47:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:30.146 20:47:47 -- scripts/common.sh@394 -- # pt= 00:05:30.146 20:47:47 -- scripts/common.sh@395 -- # return 1 00:05:30.146 20:47:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:30.146 1+0 records in 00:05:30.146 1+0 records out 00:05:30.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192232 s, 54.5 MB/s 00:05:30.146 20:47:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.146 20:47:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.146 20:47:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:30.146 20:47:47 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:30.146 20:47:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:30.146 No valid GPT data, bailing 00:05:30.146 20:47:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:30.146 20:47:47 -- scripts/common.sh@394 -- # pt= 00:05:30.146 20:47:47 -- scripts/common.sh@395 -- # return 1 00:05:30.146 20:47:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:30.146 1+0 records in 00:05:30.146 1+0 records out 00:05:30.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00397252 s, 264 MB/s 00:05:30.146 20:47:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.146 20:47:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.147 20:47:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:30.147 20:47:47 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:30.147 20:47:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:30.147 No valid GPT data, bailing 00:05:30.147 20:47:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:30.147 20:47:47 -- scripts/common.sh@394 -- # pt= 00:05:30.147 20:47:47 -- scripts/common.sh@395 -- # return 1 00:05:30.147 20:47:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:30.147 1+0 
records in 00:05:30.147 1+0 records out 00:05:30.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497327 s, 211 MB/s 00:05:30.147 20:47:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.147 20:47:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.147 20:47:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:30.147 20:47:47 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:30.147 20:47:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:30.147 No valid GPT data, bailing 00:05:30.147 20:47:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:30.147 20:47:47 -- scripts/common.sh@394 -- # pt= 00:05:30.147 20:47:47 -- scripts/common.sh@395 -- # return 1 00:05:30.147 20:47:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:30.147 1+0 records in 00:05:30.147 1+0 records out 00:05:30.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00539226 s, 194 MB/s 00:05:30.147 20:47:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.147 20:47:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.147 20:47:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:30.147 20:47:47 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:30.147 20:47:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:30.147 No valid GPT data, bailing 00:05:30.147 20:47:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:30.147 20:47:47 -- scripts/common.sh@394 -- # pt= 00:05:30.147 20:47:47 -- scripts/common.sh@395 -- # return 1 00:05:30.147 20:47:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:30.147 1+0 records in 00:05:30.147 1+0 records out 00:05:30.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00534839 s, 196 MB/s 00:05:30.147 20:47:47 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:30.147 20:47:47 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:30.147 20:47:47 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:30.147 20:47:47 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:30.147 20:47:47 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:30.147 No valid GPT data, bailing 00:05:30.147 20:47:47 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:30.147 20:47:47 -- scripts/common.sh@394 -- # pt= 00:05:30.147 20:47:47 -- scripts/common.sh@395 -- # return 1 00:05:30.147 20:47:47 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:30.147 1+0 records in 00:05:30.147 1+0 records out 00:05:30.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488226 s, 215 MB/s 00:05:30.147 20:47:47 -- spdk/autotest.sh@105 -- # sync 00:05:30.417 20:47:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:30.417 20:47:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:30.417 20:47:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:32.319 20:47:50 -- spdk/autotest.sh@111 -- # uname -s 00:05:32.319 20:47:50 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:32.319 20:47:50 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:32.319 20:47:50 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:32.577 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:33.144 
Hugepages 00:05:33.144 node hugesize free / total 00:05:33.144 node0 1048576kB 0 / 0 00:05:33.144 node0 2048kB 0 / 0 00:05:33.144 00:05:33.144 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:33.144 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:33.144 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:33.144 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:33.144 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:33.402 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:33.402 20:47:51 -- spdk/autotest.sh@117 -- # uname -s 00:05:33.402 20:47:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:33.402 20:47:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:33.402 20:47:51 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:33.662 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.228 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.228 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.228 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.486 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.486 20:47:52 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:35.420 20:47:53 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:35.420 20:47:53 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:35.420 20:47:53 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:35.420 20:47:53 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:35.420 20:47:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:35.420 20:47:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:35.420 20:47:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:35.420 20:47:53 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:35.420 20:47:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:35.420 20:47:53 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:35.420 20:47:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:35.420 20:47:53 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:35.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.937 Waiting for block devices as requested 00:05:35.937 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:35.937 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.197 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.197 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:41.465 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:41.465 20:47:59 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:41.465 20:47:59 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:41.465 20:47:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:41.465 20:47:59 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:41.465 20:47:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:41.465 20:47:59 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:41.465 20:47:59 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:41.465 20:47:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:41.465 20:47:59 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:41.465 20:47:59 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:41.465 20:47:59 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:41.465 20:47:59 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:41.465 20:47:59 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:41.465 20:47:59 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:41.465 20:47:59 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:41.465 20:47:59 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:41.465 20:47:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:41.465 20:47:59 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:41.465 20:47:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:41.465 20:47:59 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:41.465 20:47:59 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:41.465 20:47:59 -- common/autotest_common.sh@1543 -- # continue 00:05:41.465 20:47:59 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:41.465 20:47:59 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:41.465 20:47:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:41.465 20:47:59 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:41.465 20:47:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:41.465 20:47:59 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:41.465 20:47:59 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:41.465 20:47:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:41.465 20:47:59 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:41.465 20:47:59 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:41.465 20:47:59 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:41.465 20:47:59 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:41.465 20:47:59 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:41.465 20:47:59 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:41.465 20:47:59 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:41.465 20:47:59 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:41.465 20:47:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:41.465 20:47:59 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:41.465 20:47:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:41.465 20:47:59 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:41.465 20:47:59 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:41.465 20:47:59 -- common/autotest_common.sh@1543 -- # continue 00:05:41.465 20:47:59 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:41.465 20:47:59 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:41.465 20:47:59 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 
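Each readlink/grep/basename triple in this pre-cleanup pass is get_nvme_ctrlr_from_bdf resolving a PCI address to its controller node: /sys/class/nvme/nvmeN is a symlink into the owning PCI device's sysfs tree, so canonicalizing it and matching on the BDF identifies the controller. A minimal standalone version of the same lookup, a sketch assuming the sysfs layout shown in the log:

get_nvme_ctrlr_from_bdf() {
    local bdf=$1 path
    # readlink -f canonicalizes each class symlink to a path like
    # /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme") || return 1
    printf '/dev/%s\n' "$(basename "$path")"
}

get_nvme_ctrlr_from_bdf 0000:00:10.0   # on this machine: /dev/nvme1

The enumeration order is not stable across boots, which is why the script maps BDF to controller instead of hardcoding device names: in this run 0000:00:10.0 is nvme1 while 0000:00:11.0 is nvme0.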
00:05:41.465 20:47:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:41.465 20:47:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:41.465 20:47:59 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:41.465 20:47:59 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:41.465 20:47:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:41.465 20:47:59 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:41.465 20:47:59 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:41.465 20:47:59 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:41.465 20:47:59 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:41.465 20:47:59 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:41.465 20:47:59 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:41.465 20:47:59 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:41.465 20:47:59 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:41.465 20:47:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:41.465 20:47:59 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:41.465 20:47:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:41.465 20:47:59 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:41.465 20:47:59 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:41.465 20:47:59 -- common/autotest_common.sh@1543 -- # continue 00:05:41.465 20:47:59 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:41.465 20:47:59 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:41.465 20:47:59 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:41.465 20:47:59 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:41.466 20:47:59 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:41.466 20:47:59 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:41.466 20:47:59 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:41.466 20:47:59 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:41.466 20:47:59 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:41.466 20:47:59 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:41.466 20:47:59 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:41.466 20:47:59 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:41.466 20:47:59 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:41.466 20:47:59 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:41.466 20:47:59 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:41.466 20:47:59 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:41.466 20:47:59 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:41.466 20:47:59 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:41.466 20:47:59 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:41.466 20:47:59 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:41.466 20:47:59 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:05:41.466 20:47:59 -- common/autotest_common.sh@1543 -- # continue 00:05:41.466 20:47:59 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:41.466 20:47:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:41.466 20:47:59 -- common/autotest_common.sh@10 -- # set +x 00:05:41.466 20:47:59 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:41.466 20:47:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.466 20:47:59 -- common/autotest_common.sh@10 -- # set +x 00:05:41.466 20:47:59 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.293 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.293 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.552 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.552 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.552 20:48:00 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:42.552 20:48:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:42.552 20:48:00 -- common/autotest_common.sh@10 -- # set +x 00:05:42.552 20:48:00 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:42.552 20:48:00 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:42.552 20:48:00 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:42.552 20:48:00 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:42.552 20:48:00 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:42.552 20:48:00 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:42.552 20:48:00 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:42.552 20:48:00 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:42.552 20:48:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:42.552 20:48:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:42.552 20:48:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:42.552 20:48:00 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:42.552 20:48:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:42.552 20:48:00 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:42.552 20:48:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:42.552 20:48:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:42.552 20:48:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:42.552 20:48:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:42.552 20:48:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:42.552 20:48:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:42.552 20:48:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:42.552 20:48:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:42.552 20:48:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:42.552 20:48:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:42.552 20:48:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:42.552 20:48:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:42.552 20:48:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
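The cat/compare pairs in the trace above are get_nvme_bdfs_by_id filtering controllers by PCI device ID: opal_revert_cleanup only acts on parts reporting 0x0a54 (the device ID of certain Intel datacenter NVMe drives), and the emulated QEMU controllers all report 0x0010, so the list comes back empty and the revert is skipped. A compact sketch of that filter (variable names follow the trace; in the real script the candidate list comes from gen_nvme.sh):

get_nvme_bdfs_by_id() {
    local target=$1 bdf device bdfs=()
    for bdf in "${_bdfs[@]}"; do   # _bdfs: all NVMe BDFs, e.g. 0000:00:10.0 ...
        # Each PCI function exposes its device ID at a fixed sysfs path.
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == "$target" ]] && bdfs+=("$bdf")
    done
    (( ${#bdfs[@]} )) && printf '%s\n' "${bdfs[@]}"
}

_bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
get_nvme_bdfs_by_id 0x0a54   # prints nothing here: every emulated device ID is 0x0010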
00:05:42.552 20:48:00 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:42.552 20:48:00 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:42.552 20:48:00 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:42.552 20:48:00 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:42.552 20:48:00 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:42.552 20:48:00 -- common/autotest_common.sh@1572 -- # return 0 00:05:42.552 20:48:00 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:42.552 20:48:00 -- common/autotest_common.sh@1580 -- # return 0 00:05:42.552 20:48:00 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:42.552 20:48:00 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:42.552 20:48:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:42.552 20:48:00 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:42.552 20:48:00 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:42.552 20:48:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.552 20:48:00 -- common/autotest_common.sh@10 -- # set +x 00:05:42.552 20:48:00 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:42.552 20:48:00 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:42.552 20:48:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.552 20:48:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.552 20:48:00 -- common/autotest_common.sh@10 -- # set +x 00:05:42.552 ************************************ 00:05:42.552 START TEST env 00:05:42.552 ************************************ 00:05:42.552 20:48:00 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:42.810 * Looking for test storage... 00:05:42.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:42.810 20:48:00 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:42.810 20:48:00 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:42.810 20:48:00 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:42.810 20:48:00 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:42.810 20:48:00 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.810 20:48:00 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.810 20:48:00 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.810 20:48:00 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.810 20:48:00 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.810 20:48:00 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.811 20:48:00 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.811 20:48:00 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.811 20:48:00 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.811 20:48:00 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.811 20:48:00 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.811 20:48:00 env -- scripts/common.sh@344 -- # case "$op" in 00:05:42.811 20:48:00 env -- scripts/common.sh@345 -- # : 1 00:05:42.811 20:48:00 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.811 20:48:00 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.811 20:48:00 env -- scripts/common.sh@365 -- # decimal 1 00:05:42.811 20:48:00 env -- scripts/common.sh@353 -- # local d=1 00:05:42.811 20:48:00 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.811 20:48:00 env -- scripts/common.sh@355 -- # echo 1 00:05:42.811 20:48:00 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.811 20:48:00 env -- scripts/common.sh@366 -- # decimal 2 00:05:42.811 20:48:00 env -- scripts/common.sh@353 -- # local d=2 00:05:42.811 20:48:00 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.811 20:48:00 env -- scripts/common.sh@355 -- # echo 2 00:05:42.811 20:48:00 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.811 20:48:00 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.811 20:48:00 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.811 20:48:00 env -- scripts/common.sh@368 -- # return 0 00:05:42.811 20:48:00 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.811 20:48:00 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:42.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.811 --rc genhtml_branch_coverage=1 00:05:42.811 --rc genhtml_function_coverage=1 00:05:42.811 --rc genhtml_legend=1 00:05:42.811 --rc geninfo_all_blocks=1 00:05:42.811 --rc geninfo_unexecuted_blocks=1 00:05:42.811 00:05:42.811 ' 00:05:42.811 20:48:00 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:42.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.811 --rc genhtml_branch_coverage=1 00:05:42.811 --rc genhtml_function_coverage=1 00:05:42.811 --rc genhtml_legend=1 00:05:42.811 --rc geninfo_all_blocks=1 00:05:42.811 --rc geninfo_unexecuted_blocks=1 00:05:42.811 00:05:42.811 ' 00:05:42.811 20:48:00 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:42.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.811 --rc genhtml_branch_coverage=1 00:05:42.811 --rc genhtml_function_coverage=1 00:05:42.811 --rc genhtml_legend=1 00:05:42.811 --rc geninfo_all_blocks=1 00:05:42.811 --rc geninfo_unexecuted_blocks=1 00:05:42.811 00:05:42.811 ' 00:05:42.811 20:48:00 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:42.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.811 --rc genhtml_branch_coverage=1 00:05:42.811 --rc genhtml_function_coverage=1 00:05:42.811 --rc genhtml_legend=1 00:05:42.811 --rc geninfo_all_blocks=1 00:05:42.811 --rc geninfo_unexecuted_blocks=1 00:05:42.811 00:05:42.811 ' 00:05:42.811 20:48:00 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:42.811 20:48:00 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.811 20:48:00 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.811 20:48:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.811 ************************************ 00:05:42.811 START TEST env_memory 00:05:42.811 ************************************ 00:05:42.811 20:48:00 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:42.811 00:05:42.811 00:05:42.811 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.811 http://cunit.sourceforge.net/ 00:05:42.811 00:05:42.811 00:05:42.811 Suite: memory 00:05:42.811 Test: alloc and free memory map ...[2024-11-20 20:48:00.877447] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:42.811 passed 00:05:42.811 Test: mem map translation ...[2024-11-20 20:48:00.916457] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:42.811 [2024-11-20 20:48:00.916623] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:42.811 [2024-11-20 20:48:00.916808] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:42.811 [2024-11-20 20:48:00.916829] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:43.070 passed 00:05:43.070 Test: mem map registration ...[2024-11-20 20:48:00.984979] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:43.070 [2024-11-20 20:48:00.985017] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:43.070 passed 00:05:43.070 Test: mem map adjacent registrations ...passed 00:05:43.070 00:05:43.070 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.070 suites 1 1 n/a 0 0 00:05:43.070 tests 4 4 4 0 0 00:05:43.070 asserts 152 152 152 0 n/a 00:05:43.070 00:05:43.070 Elapsed time = 0.233 seconds 00:05:43.070 00:05:43.070 ************************************ 00:05:43.070 END TEST env_memory 00:05:43.070 ************************************ 00:05:43.070 real 0m0.263s 00:05:43.070 user 0m0.244s 00:05:43.070 sys 0m0.011s 00:05:43.070 20:48:01 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.070 20:48:01 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:43.070 20:48:01 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:43.070 20:48:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.070 20:48:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.070 20:48:01 env -- common/autotest_common.sh@10 -- # set +x 00:05:43.070 ************************************ 00:05:43.070 START TEST env_vtophys 00:05:43.070 ************************************ 00:05:43.070 20:48:01 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:43.070 EAL: lib.eal log level changed from notice to debug 00:05:43.070 EAL: Detected lcore 0 as core 0 on socket 0 00:05:43.070 EAL: Detected lcore 1 as core 0 on socket 0 00:05:43.070 EAL: Detected lcore 2 as core 0 on socket 0 00:05:43.070 EAL: Detected lcore 3 as core 0 on socket 0 00:05:43.070 EAL: Detected lcore 4 as core 0 on socket 0 00:05:43.070 EAL: Detected lcore 5 as core 0 on socket 0 00:05:43.070 EAL: Detected lcore 6 as core 0 on socket 0 00:05:43.070 EAL: Detected lcore 7 as core 0 on socket 0 00:05:43.070 EAL: Detected lcore 8 as core 0 on socket 0 00:05:43.070 EAL: Detected lcore 9 as core 0 on socket 0 00:05:43.070 EAL: Maximum logical cores by configuration: 128 00:05:43.070 EAL: Detected CPU lcores: 10 00:05:43.070 EAL: Detected NUMA nodes: 1 00:05:43.070 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:43.070 EAL: Detected shared linkage of DPDK 00:05:43.070 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:43.070 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:43.070 EAL: Registered [vdev] bus. 00:05:43.070 EAL: bus.vdev log level changed from disabled to notice 00:05:43.070 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:43.070 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:43.070 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:43.070 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:43.070 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:43.070 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:43.070 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:43.070 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:43.070 EAL: No shared files mode enabled, IPC will be disabled 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Selected IOVA mode 'PA' 00:05:43.381 EAL: Probing VFIO support... 00:05:43.381 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:43.381 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:43.381 EAL: Ask a virtual area of 0x2e000 bytes 00:05:43.381 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:43.381 EAL: Setting up physically contiguous memory... 00:05:43.381 EAL: Setting maximum number of open files to 524288 00:05:43.381 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:43.381 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:43.381 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.381 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:43.381 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.381 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.381 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:43.381 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:43.381 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.381 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:43.381 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.381 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.381 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:43.381 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:43.381 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.381 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:43.381 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.381 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.381 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:43.381 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:43.381 EAL: Ask a virtual area of 0x61000 bytes 00:05:43.381 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:43.381 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:43.381 EAL: Ask a virtual area of 0x400000000 bytes 00:05:43.381 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:43.381 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:43.381 EAL: Hugepages will be freed exactly as allocated. 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: TSC frequency is ~2600000 KHz 00:05:43.381 EAL: Main lcore 0 is ready (tid=7efeb5625a40;cpuset=[0]) 00:05:43.381 EAL: Trying to obtain current memory policy. 00:05:43.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.381 EAL: Restoring previous memory policy: 0 00:05:43.381 EAL: request: mp_malloc_sync 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: Heap on socket 0 was expanded by 2MB 00:05:43.381 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:43.381 EAL: No shared files mode enabled, IPC is disabled 00:05:43.381 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:43.381 EAL: Mem event callback 'spdk:(nil)' registered 00:05:43.381 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:43.381 00:05:43.381 00:05:43.381 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.381 http://cunit.sourceforge.net/ 00:05:43.381 00:05:43.381 00:05:43.381 Suite: components_suite 00:05:43.640 Test: vtophys_malloc_test ...passed 00:05:43.640 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:43.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.640 EAL: Restoring previous memory policy: 4 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was expanded by 4MB 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was shrunk by 4MB 00:05:43.640 EAL: Trying to obtain current memory policy. 00:05:43.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.640 EAL: Restoring previous memory policy: 4 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was expanded by 6MB 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was shrunk by 6MB 00:05:43.640 EAL: Trying to obtain current memory policy. 00:05:43.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.640 EAL: Restoring previous memory policy: 4 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was expanded by 10MB 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was shrunk by 10MB 00:05:43.640 EAL: Trying to obtain current memory policy. 
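The expand/shrink pairs that follow come from vtophys_spdk_malloc_test: each allocation forces DPDK to grow the hugepage-backed heap, and the matching free releases it, which is what the paired "Heap on socket 0 was expanded by / shrunk by" EAL messages record. A hedged sketch of watching just that heap activity by re-running the binary standalone (the path is the one the harness invokes above; the filter pattern is illustrative):

```bash
# Re-run the vtophys unit test from the build tree shown in this log and
# keep only the heap grow/release messages emitted by the EAL callbacks.
cd /home/vagrant/spdk_repo/spdk
./test/env/vtophys/vtophys 2>&1 | grep -E 'expanded by|shrunk by'
```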
00:05:43.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.640 EAL: Restoring previous memory policy: 4 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was expanded by 18MB 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was shrunk by 18MB 00:05:43.640 EAL: Trying to obtain current memory policy. 00:05:43.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.640 EAL: Restoring previous memory policy: 4 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was expanded by 34MB 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was shrunk by 34MB 00:05:43.640 EAL: Trying to obtain current memory policy. 00:05:43.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.640 EAL: Restoring previous memory policy: 4 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was expanded by 66MB 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was shrunk by 66MB 00:05:43.640 EAL: Trying to obtain current memory policy. 00:05:43.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.640 EAL: Restoring previous memory policy: 4 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was expanded by 130MB 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was shrunk by 130MB 00:05:43.640 EAL: Trying to obtain current memory policy. 00:05:43.640 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.640 EAL: Restoring previous memory policy: 4 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.640 EAL: request: mp_malloc_sync 00:05:43.640 EAL: No shared files mode enabled, IPC is disabled 00:05:43.640 EAL: Heap on socket 0 was expanded by 258MB 00:05:43.640 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.898 EAL: request: mp_malloc_sync 00:05:43.898 EAL: No shared files mode enabled, IPC is disabled 00:05:43.898 EAL: Heap on socket 0 was shrunk by 258MB 00:05:43.898 EAL: Trying to obtain current memory policy. 
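The sizes walked so far (4, 6, 10, 18, 34, 66, 130, 258 MB, with 514 and 1026 MB still to come) follow a 2^n + 2 pattern, so each step roughly doubles the pressure on the hugepage allocator. A one-liner reproducing the ladder, for reference:

```bash
# The malloc test's size ladder is 2^n + 2 MB; this prints the exact
# sequence of "expanded by" values seen in the log (4, 6, 10, ..., 1026).
for n in $(seq 1 10); do echo "$(( (1 << n) + 2 ))MB"; done
```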
00:05:43.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.898 EAL: Restoring previous memory policy: 4 00:05:43.898 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.898 EAL: request: mp_malloc_sync 00:05:43.898 EAL: No shared files mode enabled, IPC is disabled 00:05:43.898 EAL: Heap on socket 0 was expanded by 514MB 00:05:43.898 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.898 EAL: request: mp_malloc_sync 00:05:43.898 EAL: No shared files mode enabled, IPC is disabled 00:05:43.898 EAL: Heap on socket 0 was shrunk by 514MB 00:05:43.898 EAL: Trying to obtain current memory policy. 00:05:43.898 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:44.158 EAL: Restoring previous memory policy: 4 00:05:44.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.158 EAL: request: mp_malloc_sync 00:05:44.158 EAL: No shared files mode enabled, IPC is disabled 00:05:44.158 EAL: Heap on socket 0 was expanded by 1026MB 00:05:44.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.420 passed 00:05:44.420 00:05:44.420 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.420 suites 1 1 n/a 0 0 00:05:44.420 tests 2 2 2 0 0 00:05:44.420 asserts 6233 6233 6233 0 n/a 00:05:44.420 00:05:44.420 Elapsed time = 0.984 seconds 00:05:44.420 EAL: request: mp_malloc_sync 00:05:44.420 EAL: No shared files mode enabled, IPC is disabled 00:05:44.420 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:44.420 EAL: Calling mem event callback 'spdk:(nil)' 00:05:44.420 EAL: request: mp_malloc_sync 00:05:44.420 EAL: No shared files mode enabled, IPC is disabled 00:05:44.420 EAL: Heap on socket 0 was shrunk by 2MB 00:05:44.420 EAL: No shared files mode enabled, IPC is disabled 00:05:44.420 EAL: No shared files mode enabled, IPC is disabled 00:05:44.420 EAL: No shared files mode enabled, IPC is disabled 00:05:44.420 00:05:44.420 real 0m1.214s 00:05:44.420 user 0m0.488s 00:05:44.420 sys 0m0.582s 00:05:44.420 20:48:02 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.420 ************************************ 00:05:44.420 END TEST env_vtophys 00:05:44.420 ************************************ 00:05:44.420 20:48:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:44.420 20:48:02 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:44.420 20:48:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.420 20:48:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.420 20:48:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.420 ************************************ 00:05:44.420 START TEST env_pci 00:05:44.420 ************************************ 00:05:44.420 20:48:02 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:44.420 00:05:44.420 00:05:44.420 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.420 http://cunit.sourceforge.net/ 00:05:44.420 00:05:44.420 00:05:44.420 Suite: pci 00:05:44.420 Test: pci_hook ...[2024-11-20 20:48:02.440858] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68984 has claimed it 00:05:44.420 passed 00:05:44.420 00:05:44.420 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.420 suites 1 1 n/a 0 0 00:05:44.420 tests 1 1 1 0 0 00:05:44.420 asserts 25 25 25 0 n/a 00:05:44.420 00:05:44.420 Elapsed time = 0.005 seconds 00:05:44.420 EAL: Cannot find 
device (10000:00:01.0) 00:05:44.420 EAL: Failed to attach device on primary process 00:05:44.420 00:05:44.420 real 0m0.054s 00:05:44.420 user 0m0.022s 00:05:44.420 sys 0m0.032s 00:05:44.420 20:48:02 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.420 20:48:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:44.420 ************************************ 00:05:44.420 END TEST env_pci 00:05:44.420 ************************************ 00:05:44.420 20:48:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:44.420 20:48:02 env -- env/env.sh@15 -- # uname 00:05:44.420 20:48:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:44.420 20:48:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:44.420 20:48:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:44.420 20:48:02 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:44.420 20:48:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.420 20:48:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.680 ************************************ 00:05:44.680 START TEST env_dpdk_post_init 00:05:44.680 ************************************ 00:05:44.680 20:48:02 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:44.680 EAL: Detected CPU lcores: 10 00:05:44.680 EAL: Detected NUMA nodes: 1 00:05:44.680 EAL: Detected shared linkage of DPDK 00:05:44.680 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:44.680 EAL: Selected IOVA mode 'PA' 00:05:44.680 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:44.680 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:44.680 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:44.680 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:44.680 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:44.680 Starting DPDK initialization... 00:05:44.680 Starting SPDK post initialization... 00:05:44.680 SPDK NVMe probe 00:05:44.680 Attaching to 0000:00:10.0 00:05:44.680 Attaching to 0000:00:11.0 00:05:44.680 Attaching to 0000:00:12.0 00:05:44.680 Attaching to 0000:00:13.0 00:05:44.680 Attached to 0000:00:11.0 00:05:44.680 Attached to 0000:00:13.0 00:05:44.680 Attached to 0000:00:10.0 00:05:44.680 Attached to 0000:00:12.0 00:05:44.680 Cleaning up... 
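Note that the "Attached to" order (11.0, 13.0, 10.0, 12.0) differs from the probe order (10.0 through 13.0): controller attach completes asynchronously, so completion order is not deterministic. The four emulated functions can be confirmed from sysfs, the same tree the harness read earlier in this log; a small illustrative check:

```bash
# List the four QEMU NVMe functions by BDF and print their PCI device ID
# (0x0010, matching the spdk_nvme probe lines above). Paths follow the
# /sys/bus/pci/devices layout already used by the harness.
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    echo "$bdf: $(cat /sys/bus/pci/devices/$bdf/device)"
done
```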
00:05:44.680 00:05:44.680 real 0m0.220s 00:05:44.680 user 0m0.069s 00:05:44.680 sys 0m0.052s 00:05:44.680 20:48:02 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.680 20:48:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.680 ************************************ 00:05:44.680 END TEST env_dpdk_post_init 00:05:44.680 ************************************ 00:05:44.940 20:48:02 env -- env/env.sh@26 -- # uname 00:05:44.940 20:48:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:44.940 20:48:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.940 20:48:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.940 20:48:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.940 20:48:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.940 ************************************ 00:05:44.940 START TEST env_mem_callbacks 00:05:44.940 ************************************ 00:05:44.940 20:48:02 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:44.940 EAL: Detected CPU lcores: 10 00:05:44.940 EAL: Detected NUMA nodes: 1 00:05:44.940 EAL: Detected shared linkage of DPDK 00:05:44.940 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:44.940 EAL: Selected IOVA mode 'PA' 00:05:44.940 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:44.940 00:05:44.940 00:05:44.940 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.940 http://cunit.sourceforge.net/ 00:05:44.940 00:05:44.940 00:05:44.940 Suite: memory 00:05:44.940 Test: test ... 00:05:44.940 register 0x200000200000 2097152 00:05:44.940 malloc 3145728 00:05:44.940 register 0x200000400000 4194304 00:05:44.940 buf 0x200000500000 len 3145728 PASSED 00:05:44.940 malloc 64 00:05:44.940 buf 0x2000004fff40 len 64 PASSED 00:05:44.940 malloc 4194304 00:05:44.940 register 0x200000800000 6291456 00:05:44.940 buf 0x200000a00000 len 4194304 PASSED 00:05:44.940 free 0x200000500000 3145728 00:05:44.940 free 0x2000004fff40 64 00:05:44.940 unregister 0x200000400000 4194304 PASSED 00:05:44.940 free 0x200000a00000 4194304 00:05:44.940 unregister 0x200000800000 6291456 PASSED 00:05:44.940 malloc 8388608 00:05:44.940 register 0x200000400000 10485760 00:05:44.940 buf 0x200000600000 len 8388608 PASSED 00:05:44.940 free 0x200000600000 8388608 00:05:44.940 unregister 0x200000400000 10485760 PASSED 00:05:44.940 passed 00:05:44.940 00:05:44.940 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.940 suites 1 1 n/a 0 0 00:05:44.940 tests 1 1 1 0 0 00:05:44.940 asserts 15 15 15 0 n/a 00:05:44.940 00:05:44.940 Elapsed time = 0.007 seconds 00:05:44.940 00:05:44.940 real 0m0.152s 00:05:44.940 user 0m0.019s 00:05:44.940 sys 0m0.031s 00:05:44.940 ************************************ 00:05:44.940 END TEST env_mem_callbacks 00:05:44.940 20:48:02 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.940 20:48:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:44.940 ************************************ 00:05:44.940 00:05:44.940 real 0m2.344s 00:05:44.940 user 0m0.995s 00:05:44.940 sys 0m0.919s 00:05:44.940 20:48:03 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.940 20:48:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:44.940 ************************************ 00:05:44.940 END TEST env 00:05:44.940 
************************************ 00:05:44.940 20:48:03 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:44.940 20:48:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.940 20:48:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.940 20:48:03 -- common/autotest_common.sh@10 -- # set +x 00:05:45.199 ************************************ 00:05:45.199 START TEST rpc 00:05:45.199 ************************************ 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:45.199 * Looking for test storage... 00:05:45.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.199 20:48:03 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.199 20:48:03 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.199 20:48:03 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.199 20:48:03 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.199 20:48:03 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.199 20:48:03 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.199 20:48:03 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.199 20:48:03 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.199 20:48:03 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.199 20:48:03 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.199 20:48:03 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.199 20:48:03 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:45.199 20:48:03 rpc -- scripts/common.sh@345 -- # : 1 00:05:45.199 20:48:03 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.199 20:48:03 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.199 20:48:03 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:45.199 20:48:03 rpc -- scripts/common.sh@353 -- # local d=1 00:05:45.199 20:48:03 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.199 20:48:03 rpc -- scripts/common.sh@355 -- # echo 1 00:05:45.199 20:48:03 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.199 20:48:03 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:45.199 20:48:03 rpc -- scripts/common.sh@353 -- # local d=2 00:05:45.199 20:48:03 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.199 20:48:03 rpc -- scripts/common.sh@355 -- # echo 2 00:05:45.199 20:48:03 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.199 20:48:03 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.199 20:48:03 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.199 20:48:03 rpc -- scripts/common.sh@368 -- # return 0 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.199 --rc genhtml_branch_coverage=1 00:05:45.199 --rc genhtml_function_coverage=1 00:05:45.199 --rc genhtml_legend=1 00:05:45.199 --rc geninfo_all_blocks=1 00:05:45.199 --rc geninfo_unexecuted_blocks=1 00:05:45.199 00:05:45.199 ' 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.199 --rc genhtml_branch_coverage=1 00:05:45.199 --rc genhtml_function_coverage=1 00:05:45.199 --rc genhtml_legend=1 00:05:45.199 --rc geninfo_all_blocks=1 00:05:45.199 --rc geninfo_unexecuted_blocks=1 00:05:45.199 00:05:45.199 ' 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.199 --rc genhtml_branch_coverage=1 00:05:45.199 --rc genhtml_function_coverage=1 00:05:45.199 --rc genhtml_legend=1 00:05:45.199 --rc geninfo_all_blocks=1 00:05:45.199 --rc geninfo_unexecuted_blocks=1 00:05:45.199 00:05:45.199 ' 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.199 --rc genhtml_branch_coverage=1 00:05:45.199 --rc genhtml_function_coverage=1 00:05:45.199 --rc genhtml_legend=1 00:05:45.199 --rc geninfo_all_blocks=1 00:05:45.199 --rc geninfo_unexecuted_blocks=1 00:05:45.199 00:05:45.199 ' 00:05:45.199 20:48:03 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69106 00:05:45.199 20:48:03 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.199 20:48:03 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69106 00:05:45.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@835 -- # '[' -z 69106 ']' 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
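waitforlisten blocks here until the freshly launched target answers on /var/tmp/spdk.sock. Conceptually it is just a poll loop; a minimal sketch of the idea, assuming the stock rpc.py client (the retry cadence is illustrative, not the harness's actual timing or error handling):

```bash
# Minimal sketch of what waitforlisten amounts to: poll the UNIX-domain RPC
# socket with a harmless request until spdk_tgt starts answering.
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
```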
00:05:45.199 20:48:03 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.199 20:48:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.199 [2024-11-20 20:48:03.272137] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:05:45.199 [2024-11-20 20:48:03.272260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69106 ] 00:05:45.458 [2024-11-20 20:48:03.414815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.458 [2024-11-20 20:48:03.434581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:45.458 [2024-11-20 20:48:03.434634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69106' to capture a snapshot of events at runtime. 00:05:45.458 [2024-11-20 20:48:03.434649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:45.458 [2024-11-20 20:48:03.434657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:45.458 [2024-11-20 20:48:03.434667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69106 for offline analysis/debug. 00:05:45.458 [2024-11-20 20:48:03.434983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.031 20:48:04 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.031 20:48:04 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:46.031 20:48:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:46.031 20:48:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:46.031 20:48:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:46.031 20:48:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:46.031 20:48:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.031 20:48:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.031 20:48:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.031 ************************************ 00:05:46.031 START TEST rpc_integrity 00:05:46.031 ************************************ 00:05:46.031 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:46.031 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.031 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.031 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.031 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.031 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.031 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.293 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.293 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
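The integrity test starting here follows a simple create/inspect contract: bdev_get_bdevs must report an empty list first, then exactly the bdevs created. Condensed from the trace that follows (the same RPCs appear verbatim in this log; the passthru step is included for context):

```bash
# The rpc_integrity flow, condensed: create an 8 MB malloc bdev with
# 512-byte blocks, verify it via bdev_get_bdevs, then wrap it in a
# passthru bdev as the trace below does.
malloc=$(scripts/rpc.py bdev_malloc_create 8 512)
scripts/rpc.py bdev_get_bdevs | jq length              # now reports 1
scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
```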
00:05:46.293 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.293 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.293 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.293 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:46.293 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.293 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.293 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.293 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.293 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.293 { 00:05:46.293 "name": "Malloc0", 00:05:46.293 "aliases": [ 00:05:46.293 "93da2cd4-d690-437c-8cb9-e0649f8d284a" 00:05:46.293 ], 00:05:46.293 "product_name": "Malloc disk", 00:05:46.293 "block_size": 512, 00:05:46.293 "num_blocks": 16384, 00:05:46.293 "uuid": "93da2cd4-d690-437c-8cb9-e0649f8d284a", 00:05:46.293 "assigned_rate_limits": { 00:05:46.293 "rw_ios_per_sec": 0, 00:05:46.293 "rw_mbytes_per_sec": 0, 00:05:46.293 "r_mbytes_per_sec": 0, 00:05:46.293 "w_mbytes_per_sec": 0 00:05:46.293 }, 00:05:46.293 "claimed": false, 00:05:46.293 "zoned": false, 00:05:46.293 "supported_io_types": { 00:05:46.293 "read": true, 00:05:46.293 "write": true, 00:05:46.293 "unmap": true, 00:05:46.293 "flush": true, 00:05:46.293 "reset": true, 00:05:46.293 "nvme_admin": false, 00:05:46.293 "nvme_io": false, 00:05:46.293 "nvme_io_md": false, 00:05:46.293 "write_zeroes": true, 00:05:46.293 "zcopy": true, 00:05:46.293 "get_zone_info": false, 00:05:46.293 "zone_management": false, 00:05:46.293 "zone_append": false, 00:05:46.293 "compare": false, 00:05:46.293 "compare_and_write": false, 00:05:46.293 "abort": true, 00:05:46.293 "seek_hole": false, 00:05:46.293 "seek_data": false, 00:05:46.293 "copy": true, 00:05:46.293 "nvme_iov_md": false 00:05:46.293 }, 00:05:46.293 "memory_domains": [ 00:05:46.293 { 00:05:46.293 "dma_device_id": "system", 00:05:46.293 "dma_device_type": 1 00:05:46.293 }, 00:05:46.294 { 00:05:46.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.294 "dma_device_type": 2 00:05:46.294 } 00:05:46.294 ], 00:05:46.294 "driver_specific": {} 00:05:46.294 } 00:05:46.294 ]' 00:05:46.294 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:46.294 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.294 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.294 [2024-11-20 20:48:04.245770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:46.294 [2024-11-20 20:48:04.245874] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.294 [2024-11-20 20:48:04.245915] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:46.294 [2024-11-20 20:48:04.245927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.294 [2024-11-20 20:48:04.248571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.294 [2024-11-20 20:48:04.248634] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.294 
Passthru0 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.294 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.294 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.294 { 00:05:46.294 "name": "Malloc0", 00:05:46.294 "aliases": [ 00:05:46.294 "93da2cd4-d690-437c-8cb9-e0649f8d284a" 00:05:46.294 ], 00:05:46.294 "product_name": "Malloc disk", 00:05:46.294 "block_size": 512, 00:05:46.294 "num_blocks": 16384, 00:05:46.294 "uuid": "93da2cd4-d690-437c-8cb9-e0649f8d284a", 00:05:46.294 "assigned_rate_limits": { 00:05:46.294 "rw_ios_per_sec": 0, 00:05:46.294 "rw_mbytes_per_sec": 0, 00:05:46.294 "r_mbytes_per_sec": 0, 00:05:46.294 "w_mbytes_per_sec": 0 00:05:46.294 }, 00:05:46.294 "claimed": true, 00:05:46.294 "claim_type": "exclusive_write", 00:05:46.294 "zoned": false, 00:05:46.294 "supported_io_types": { 00:05:46.294 "read": true, 00:05:46.294 "write": true, 00:05:46.294 "unmap": true, 00:05:46.294 "flush": true, 00:05:46.294 "reset": true, 00:05:46.294 "nvme_admin": false, 00:05:46.294 "nvme_io": false, 00:05:46.294 "nvme_io_md": false, 00:05:46.294 "write_zeroes": true, 00:05:46.294 "zcopy": true, 00:05:46.294 "get_zone_info": false, 00:05:46.294 "zone_management": false, 00:05:46.294 "zone_append": false, 00:05:46.294 "compare": false, 00:05:46.294 "compare_and_write": false, 00:05:46.294 "abort": true, 00:05:46.294 "seek_hole": false, 00:05:46.294 "seek_data": false, 00:05:46.294 "copy": true, 00:05:46.294 "nvme_iov_md": false 00:05:46.294 }, 00:05:46.294 "memory_domains": [ 00:05:46.294 { 00:05:46.294 "dma_device_id": "system", 00:05:46.294 "dma_device_type": 1 00:05:46.294 }, 00:05:46.294 { 00:05:46.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.294 "dma_device_type": 2 00:05:46.294 } 00:05:46.294 ], 00:05:46.294 "driver_specific": {} 00:05:46.294 }, 00:05:46.294 { 00:05:46.294 "name": "Passthru0", 00:05:46.294 "aliases": [ 00:05:46.294 "4c92f3d2-7da5-5913-a98d-7959ac4c7941" 00:05:46.294 ], 00:05:46.294 "product_name": "passthru", 00:05:46.294 "block_size": 512, 00:05:46.294 "num_blocks": 16384, 00:05:46.294 "uuid": "4c92f3d2-7da5-5913-a98d-7959ac4c7941", 00:05:46.294 "assigned_rate_limits": { 00:05:46.294 "rw_ios_per_sec": 0, 00:05:46.294 "rw_mbytes_per_sec": 0, 00:05:46.294 "r_mbytes_per_sec": 0, 00:05:46.294 "w_mbytes_per_sec": 0 00:05:46.294 }, 00:05:46.294 "claimed": false, 00:05:46.294 "zoned": false, 00:05:46.294 "supported_io_types": { 00:05:46.294 "read": true, 00:05:46.294 "write": true, 00:05:46.294 "unmap": true, 00:05:46.294 "flush": true, 00:05:46.294 "reset": true, 00:05:46.294 "nvme_admin": false, 00:05:46.294 "nvme_io": false, 00:05:46.294 "nvme_io_md": false, 00:05:46.294 "write_zeroes": true, 00:05:46.294 "zcopy": true, 00:05:46.294 "get_zone_info": false, 00:05:46.294 "zone_management": false, 00:05:46.294 "zone_append": false, 00:05:46.294 "compare": false, 00:05:46.294 "compare_and_write": false, 00:05:46.294 "abort": true, 00:05:46.294 "seek_hole": false, 00:05:46.294 "seek_data": false, 00:05:46.294 "copy": true, 00:05:46.294 "nvme_iov_md": false 00:05:46.294 }, 00:05:46.294 "memory_domains": [ 00:05:46.294 { 00:05:46.294 "dma_device_id": "system", 00:05:46.294 "dma_device_type": 1 00:05:46.294 }, 
00:05:46.294 { 00:05:46.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.294 "dma_device_type": 2 00:05:46.294 } 00:05:46.294 ], 00:05:46.294 "driver_specific": { 00:05:46.294 "passthru": { 00:05:46.294 "name": "Passthru0", 00:05:46.294 "base_bdev_name": "Malloc0" 00:05:46.294 } 00:05:46.294 } 00:05:46.294 } 00:05:46.294 ]' 00:05:46.294 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:46.294 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.294 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.294 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.294 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.294 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.294 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:46.294 20:48:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.294 00:05:46.294 real 0m0.240s 00:05:46.294 user 0m0.134s 00:05:46.294 sys 0m0.040s 00:05:46.294 ************************************ 00:05:46.294 END TEST rpc_integrity 00:05:46.294 ************************************ 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.294 20:48:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.556 20:48:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:46.556 20:48:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.556 20:48:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.556 20:48:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.556 ************************************ 00:05:46.556 START TEST rpc_plugins 00:05:46.556 ************************************ 00:05:46.556 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:46.556 20:48:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:46.557 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.557 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.557 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.557 20:48:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:46.557 20:48:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:46.557 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.557 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.557 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.557 20:48:04 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:46.557 { 00:05:46.557 "name": "Malloc1", 00:05:46.557 "aliases": [ 00:05:46.557 "c57d55f6-9534-4ab8-bb02-db6ff6b9fd91" 00:05:46.557 ], 00:05:46.557 "product_name": "Malloc disk", 00:05:46.557 "block_size": 4096, 00:05:46.557 "num_blocks": 256, 00:05:46.557 "uuid": "c57d55f6-9534-4ab8-bb02-db6ff6b9fd91", 00:05:46.557 "assigned_rate_limits": { 00:05:46.557 "rw_ios_per_sec": 0, 00:05:46.557 "rw_mbytes_per_sec": 0, 00:05:46.557 "r_mbytes_per_sec": 0, 00:05:46.557 "w_mbytes_per_sec": 0 00:05:46.557 }, 00:05:46.557 "claimed": false, 00:05:46.557 "zoned": false, 00:05:46.557 "supported_io_types": { 00:05:46.557 "read": true, 00:05:46.557 "write": true, 00:05:46.557 "unmap": true, 00:05:46.557 "flush": true, 00:05:46.557 "reset": true, 00:05:46.557 "nvme_admin": false, 00:05:46.557 "nvme_io": false, 00:05:46.557 "nvme_io_md": false, 00:05:46.557 "write_zeroes": true, 00:05:46.557 "zcopy": true, 00:05:46.557 "get_zone_info": false, 00:05:46.557 "zone_management": false, 00:05:46.557 "zone_append": false, 00:05:46.557 "compare": false, 00:05:46.557 "compare_and_write": false, 00:05:46.557 "abort": true, 00:05:46.557 "seek_hole": false, 00:05:46.557 "seek_data": false, 00:05:46.557 "copy": true, 00:05:46.557 "nvme_iov_md": false 00:05:46.557 }, 00:05:46.557 "memory_domains": [ 00:05:46.557 { 00:05:46.557 "dma_device_id": "system", 00:05:46.557 "dma_device_type": 1 00:05:46.557 }, 00:05:46.557 { 00:05:46.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.557 "dma_device_type": 2 00:05:46.557 } 00:05:46.557 ], 00:05:46.557 "driver_specific": {} 00:05:46.557 } 00:05:46.557 ]' 00:05:46.557 20:48:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:46.557 20:48:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:46.557 20:48:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:46.557 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.557 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.557 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.557 20:48:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:46.557 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.557 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.557 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.557 20:48:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:46.557 20:48:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:46.557 20:48:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:46.557 00:05:46.557 real 0m0.124s 00:05:46.557 user 0m0.062s 00:05:46.557 sys 0m0.021s 00:05:46.557 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.557 ************************************ 00:05:46.557 END TEST rpc_plugins 00:05:46.557 ************************************ 00:05:46.557 20:48:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:46.557 20:48:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:46.557 20:48:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.557 20:48:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.557 20:48:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.557 ************************************ 00:05:46.557 START TEST rpc_trace_cmd_test 
00:05:46.557 ************************************ 00:05:46.557 20:48:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:46.557 20:48:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:46.557 20:48:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:46.557 20:48:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.557 20:48:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.557 20:48:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.557 20:48:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:46.557 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69106", 00:05:46.557 "tpoint_group_mask": "0x8", 00:05:46.557 "iscsi_conn": { 00:05:46.557 "mask": "0x2", 00:05:46.557 "tpoint_mask": "0x0" 00:05:46.557 }, 00:05:46.557 "scsi": { 00:05:46.557 "mask": "0x4", 00:05:46.557 "tpoint_mask": "0x0" 00:05:46.557 }, 00:05:46.557 "bdev": { 00:05:46.557 "mask": "0x8", 00:05:46.557 "tpoint_mask": "0xffffffffffffffff" 00:05:46.557 }, 00:05:46.557 "nvmf_rdma": { 00:05:46.557 "mask": "0x10", 00:05:46.557 "tpoint_mask": "0x0" 00:05:46.557 }, 00:05:46.557 "nvmf_tcp": { 00:05:46.557 "mask": "0x20", 00:05:46.557 "tpoint_mask": "0x0" 00:05:46.557 }, 00:05:46.557 "ftl": { 00:05:46.557 "mask": "0x40", 00:05:46.557 "tpoint_mask": "0x0" 00:05:46.557 }, 00:05:46.557 "blobfs": { 00:05:46.557 "mask": "0x80", 00:05:46.557 "tpoint_mask": "0x0" 00:05:46.557 }, 00:05:46.557 "dsa": { 00:05:46.557 "mask": "0x200", 00:05:46.557 "tpoint_mask": "0x0" 00:05:46.557 }, 00:05:46.557 "thread": { 00:05:46.557 "mask": "0x400", 00:05:46.557 "tpoint_mask": "0x0" 00:05:46.557 }, 00:05:46.557 "nvme_pcie": { 00:05:46.557 "mask": "0x800", 00:05:46.557 "tpoint_mask": "0x0" 00:05:46.557 }, 00:05:46.557 "iaa": { 00:05:46.557 "mask": "0x1000", 00:05:46.557 "tpoint_mask": "0x0" 00:05:46.557 }, 00:05:46.557 "nvme_tcp": { 00:05:46.557 "mask": "0x2000", 00:05:46.557 "tpoint_mask": "0x0" 00:05:46.557 }, 00:05:46.557 "bdev_nvme": { 00:05:46.557 "mask": "0x4000", 00:05:46.557 "tpoint_mask": "0x0" 00:05:46.557 }, 00:05:46.557 "sock": { 00:05:46.557 "mask": "0x8000", 00:05:46.557 "tpoint_mask": "0x0" 00:05:46.557 }, 00:05:46.557 "blob": { 00:05:46.557 "mask": "0x10000", 00:05:46.557 "tpoint_mask": "0x0" 00:05:46.557 }, 00:05:46.557 "bdev_raid": { 00:05:46.557 "mask": "0x20000", 00:05:46.558 "tpoint_mask": "0x0" 00:05:46.558 }, 00:05:46.558 "scheduler": { 00:05:46.558 "mask": "0x40000", 00:05:46.558 "tpoint_mask": "0x0" 00:05:46.558 } 00:05:46.558 }' 00:05:46.558 20:48:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:46.820 20:48:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:46.820 20:48:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:46.820 20:48:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:46.820 20:48:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:46.820 20:48:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:46.820 20:48:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:46.820 20:48:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:46.820 20:48:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:46.820 20:48:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:46.820 00:05:46.820 real 0m0.153s 00:05:46.820 
user 0m0.112s 00:05:46.820 sys 0m0.030s 00:05:46.820 20:48:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.820 ************************************ 00:05:46.820 END TEST rpc_trace_cmd_test 00:05:46.820 20:48:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:46.820 ************************************ 00:05:46.820 20:48:04 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:46.820 20:48:04 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:46.820 20:48:04 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:46.820 20:48:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.820 20:48:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.820 20:48:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.820 ************************************ 00:05:46.820 START TEST rpc_daemon_integrity 00:05:46.820 ************************************ 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.820 { 00:05:46.820 "name": "Malloc2", 00:05:46.820 "aliases": [ 00:05:46.820 "53b84c99-a95a-4b8a-b145-450df8fa8bb2" 00:05:46.820 ], 00:05:46.820 "product_name": "Malloc disk", 00:05:46.820 "block_size": 512, 00:05:46.820 "num_blocks": 16384, 00:05:46.820 "uuid": "53b84c99-a95a-4b8a-b145-450df8fa8bb2", 00:05:46.820 "assigned_rate_limits": { 00:05:46.820 "rw_ios_per_sec": 0, 00:05:46.820 "rw_mbytes_per_sec": 0, 00:05:46.820 "r_mbytes_per_sec": 0, 00:05:46.820 "w_mbytes_per_sec": 0 00:05:46.820 }, 00:05:46.820 "claimed": false, 00:05:46.820 "zoned": false, 00:05:46.820 "supported_io_types": { 00:05:46.820 "read": true, 00:05:46.820 "write": true, 00:05:46.820 "unmap": true, 00:05:46.820 "flush": true, 00:05:46.820 "reset": true, 00:05:46.820 "nvme_admin": false, 00:05:46.820 "nvme_io": false, 00:05:46.820 "nvme_io_md": false, 00:05:46.820 "write_zeroes": true, 00:05:46.820 "zcopy": true, 00:05:46.820 "get_zone_info": 
false, 00:05:46.820 "zone_management": false, 00:05:46.820 "zone_append": false, 00:05:46.820 "compare": false, 00:05:46.820 "compare_and_write": false, 00:05:46.820 "abort": true, 00:05:46.820 "seek_hole": false, 00:05:46.820 "seek_data": false, 00:05:46.820 "copy": true, 00:05:46.820 "nvme_iov_md": false 00:05:46.820 }, 00:05:46.820 "memory_domains": [ 00:05:46.820 { 00:05:46.820 "dma_device_id": "system", 00:05:46.820 "dma_device_type": 1 00:05:46.820 }, 00:05:46.820 { 00:05:46.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.820 "dma_device_type": 2 00:05:46.820 } 00:05:46.820 ], 00:05:46.820 "driver_specific": {} 00:05:46.820 } 00:05:46.820 ]' 00:05:46.820 20:48:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:47.083 20:48:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.083 20:48:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:47.083 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.083 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.083 [2024-11-20 20:48:04.971388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:47.083 [2024-11-20 20:48:04.971468] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.083 [2024-11-20 20:48:04.971494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:47.083 [2024-11-20 20:48:04.971504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.083 [2024-11-20 20:48:04.974155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:47.083 [2024-11-20 20:48:04.974212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:47.083 Passthru0 00:05:47.083 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.083 20:48:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:47.083 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.083 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.083 20:48:04 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.083 20:48:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.083 { 00:05:47.083 "name": "Malloc2", 00:05:47.083 "aliases": [ 00:05:47.083 "53b84c99-a95a-4b8a-b145-450df8fa8bb2" 00:05:47.083 ], 00:05:47.083 "product_name": "Malloc disk", 00:05:47.083 "block_size": 512, 00:05:47.083 "num_blocks": 16384, 00:05:47.083 "uuid": "53b84c99-a95a-4b8a-b145-450df8fa8bb2", 00:05:47.083 "assigned_rate_limits": { 00:05:47.083 "rw_ios_per_sec": 0, 00:05:47.083 "rw_mbytes_per_sec": 0, 00:05:47.083 "r_mbytes_per_sec": 0, 00:05:47.083 "w_mbytes_per_sec": 0 00:05:47.083 }, 00:05:47.083 "claimed": true, 00:05:47.083 "claim_type": "exclusive_write", 00:05:47.083 "zoned": false, 00:05:47.083 "supported_io_types": { 00:05:47.083 "read": true, 00:05:47.083 "write": true, 00:05:47.083 "unmap": true, 00:05:47.083 "flush": true, 00:05:47.083 "reset": true, 00:05:47.083 "nvme_admin": false, 00:05:47.083 "nvme_io": false, 00:05:47.083 "nvme_io_md": false, 00:05:47.083 "write_zeroes": true, 00:05:47.083 "zcopy": true, 00:05:47.083 "get_zone_info": false, 00:05:47.083 "zone_management": false, 00:05:47.083 "zone_append": false, 00:05:47.083 "compare": false, 
00:05:47.083 "compare_and_write": false, 00:05:47.083 "abort": true, 00:05:47.083 "seek_hole": false, 00:05:47.083 "seek_data": false, 00:05:47.083 "copy": true, 00:05:47.083 "nvme_iov_md": false 00:05:47.083 }, 00:05:47.083 "memory_domains": [ 00:05:47.083 { 00:05:47.083 "dma_device_id": "system", 00:05:47.083 "dma_device_type": 1 00:05:47.083 }, 00:05:47.083 { 00:05:47.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.083 "dma_device_type": 2 00:05:47.083 } 00:05:47.083 ], 00:05:47.083 "driver_specific": {} 00:05:47.083 }, 00:05:47.083 { 00:05:47.083 "name": "Passthru0", 00:05:47.083 "aliases": [ 00:05:47.083 "3fc025dd-be4c-5b4d-8a9a-2a46752664b5" 00:05:47.083 ], 00:05:47.083 "product_name": "passthru", 00:05:47.083 "block_size": 512, 00:05:47.083 "num_blocks": 16384, 00:05:47.083 "uuid": "3fc025dd-be4c-5b4d-8a9a-2a46752664b5", 00:05:47.083 "assigned_rate_limits": { 00:05:47.083 "rw_ios_per_sec": 0, 00:05:47.083 "rw_mbytes_per_sec": 0, 00:05:47.083 "r_mbytes_per_sec": 0, 00:05:47.083 "w_mbytes_per_sec": 0 00:05:47.083 }, 00:05:47.083 "claimed": false, 00:05:47.083 "zoned": false, 00:05:47.083 "supported_io_types": { 00:05:47.083 "read": true, 00:05:47.083 "write": true, 00:05:47.083 "unmap": true, 00:05:47.083 "flush": true, 00:05:47.083 "reset": true, 00:05:47.083 "nvme_admin": false, 00:05:47.083 "nvme_io": false, 00:05:47.083 "nvme_io_md": false, 00:05:47.083 "write_zeroes": true, 00:05:47.083 "zcopy": true, 00:05:47.083 "get_zone_info": false, 00:05:47.083 "zone_management": false, 00:05:47.083 "zone_append": false, 00:05:47.083 "compare": false, 00:05:47.083 "compare_and_write": false, 00:05:47.083 "abort": true, 00:05:47.083 "seek_hole": false, 00:05:47.083 "seek_data": false, 00:05:47.083 "copy": true, 00:05:47.083 "nvme_iov_md": false 00:05:47.083 }, 00:05:47.083 "memory_domains": [ 00:05:47.083 { 00:05:47.083 "dma_device_id": "system", 00:05:47.083 "dma_device_type": 1 00:05:47.083 }, 00:05:47.083 { 00:05:47.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.083 "dma_device_type": 2 00:05:47.083 } 00:05:47.083 ], 00:05:47.083 "driver_specific": { 00:05:47.083 "passthru": { 00:05:47.083 "name": "Passthru0", 00:05:47.083 "base_bdev_name": "Malloc2" 00:05:47.083 } 00:05:47.083 } 00:05:47.083 } 00:05:47.083 ]' 00:05:47.083 20:48:04 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:47.083 20:48:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:47.084 20:48:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:47.084 00:05:47.084 real 0m0.237s 00:05:47.084 user 0m0.133s 00:05:47.084 sys 0m0.040s 00:05:47.084 20:48:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.084 20:48:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:47.084 ************************************ 00:05:47.084 END TEST rpc_daemon_integrity 00:05:47.084 ************************************ 00:05:47.084 20:48:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:47.084 20:48:05 rpc -- rpc/rpc.sh@84 -- # killprocess 69106 00:05:47.084 20:48:05 rpc -- common/autotest_common.sh@954 -- # '[' -z 69106 ']' 00:05:47.084 20:48:05 rpc -- common/autotest_common.sh@958 -- # kill -0 69106 00:05:47.084 20:48:05 rpc -- common/autotest_common.sh@959 -- # uname 00:05:47.084 20:48:05 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.084 20:48:05 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69106 00:05:47.084 20:48:05 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.084 killing process with pid 69106 00:05:47.084 20:48:05 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.084 20:48:05 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69106' 00:05:47.084 20:48:05 rpc -- common/autotest_common.sh@973 -- # kill 69106 00:05:47.084 20:48:05 rpc -- common/autotest_common.sh@978 -- # wait 69106 00:05:47.656 00:05:47.656 real 0m2.409s 00:05:47.656 user 0m2.843s 00:05:47.656 sys 0m0.634s 00:05:47.656 20:48:05 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.656 20:48:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.656 ************************************ 00:05:47.656 END TEST rpc 00:05:47.656 ************************************ 00:05:47.656 20:48:05 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:47.656 20:48:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.657 20:48:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.657 20:48:05 -- common/autotest_common.sh@10 -- # set +x 00:05:47.657 ************************************ 00:05:47.657 START TEST skip_rpc 00:05:47.657 ************************************ 00:05:47.657 20:48:05 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:47.657 * Looking for test storage... 
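For reference, the rpc_daemon_integrity flow traced above reduces to the sketch below. It assumes scripts/rpc.py can reach the target on the default /var/tmp/spdk.sock; rpc_cmd in the trace is effectively a wrapper around it, so this is an outline, not the test's literal code.

    rpc() { scripts/rpc.py "$@"; }

    [ "$(rpc bdev_get_bdevs | jq length)" -eq 0 ]       # start with no bdevs
    malloc=$(rpc bdev_malloc_create 8 512)              # 8 MB malloc bdev, 512 B blocks
    rpc bdev_passthru_create -b "$malloc" -p Passthru0  # stack a passthru vbdev on top
    [ "$(rpc bdev_get_bdevs | jq length)" -eq 2 ]       # malloc + passthru both listed
    rpc bdev_passthru_delete Passthru0
    rpc bdev_malloc_delete "$malloc"
    [ "$(rpc bdev_get_bdevs | jq length)" -eq 0 ]       # teardown leaves the list empty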
00:05:47.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.657 20:48:05 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.657 20:48:05 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.657 20:48:05 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:47.657 20:48:05 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.657 20:48:05 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:47.657 20:48:05 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.657 20:48:05 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:47.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.657 --rc genhtml_branch_coverage=1 00:05:47.657 --rc genhtml_function_coverage=1 00:05:47.657 --rc genhtml_legend=1 00:05:47.657 --rc geninfo_all_blocks=1 00:05:47.657 --rc geninfo_unexecuted_blocks=1 00:05:47.657 00:05:47.657 ' 00:05:47.657 20:48:05 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:47.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.657 --rc genhtml_branch_coverage=1 00:05:47.657 --rc genhtml_function_coverage=1 00:05:47.657 --rc genhtml_legend=1 00:05:47.657 --rc geninfo_all_blocks=1 00:05:47.657 --rc geninfo_unexecuted_blocks=1 00:05:47.657 00:05:47.657 ' 00:05:47.657 20:48:05 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:47.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.657 --rc genhtml_branch_coverage=1 00:05:47.657 --rc genhtml_function_coverage=1 00:05:47.657 --rc genhtml_legend=1 00:05:47.657 --rc geninfo_all_blocks=1 00:05:47.657 --rc geninfo_unexecuted_blocks=1 00:05:47.657 00:05:47.657 ' 00:05:47.657 20:48:05 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:47.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.657 --rc genhtml_branch_coverage=1 00:05:47.657 --rc genhtml_function_coverage=1 00:05:47.657 --rc genhtml_legend=1 00:05:47.657 --rc geninfo_all_blocks=1 00:05:47.657 --rc geninfo_unexecuted_blocks=1 00:05:47.657 00:05:47.657 ' 00:05:47.657 20:48:05 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:47.657 20:48:05 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:47.657 20:48:05 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:47.657 20:48:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.657 20:48:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.657 20:48:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.657 ************************************ 00:05:47.657 START TEST skip_rpc 00:05:47.657 ************************************ 00:05:47.657 20:48:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:47.657 20:48:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69307 00:05:47.657 20:48:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.657 20:48:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:47.657 20:48:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:47.919 [2024-11-20 20:48:05.785404] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
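The skip_rpc case starting here reduces to: launch the target with its RPC server disabled, confirm that any RPC attempt fails, then tear down. A minimal outline, using the NOT and killprocess helpers traced elsewhere in this log:

    spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                        # nothing to poll for: no RPC socket is created
    NOT rpc_cmd spdk_get_version   # must fail, since no server is listening
    killprocess "$spdk_pid"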
00:05:47.919 [2024-11-20 20:48:05.785540] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69307 ] 00:05:47.919 [2024-11-20 20:48:05.934215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.919 [2024-11-20 20:48:05.963691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69307 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69307 ']' 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69307 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69307 00:05:53.199 killing process with pid 69307 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69307' 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69307 00:05:53.199 20:48:10 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69307 00:05:53.199 00:05:53.199 real 0m5.334s 00:05:53.199 user 0m4.918s 00:05:53.199 sys 0m0.311s 00:05:53.199 20:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.199 ************************************ 00:05:53.199 END TEST skip_rpc 00:05:53.199 ************************************ 00:05:53.199 20:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:53.199 20:48:11 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:53.199 20:48:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.199 20:48:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.199 20:48:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.199 ************************************ 00:05:53.199 START TEST skip_rpc_with_json 00:05:53.199 ************************************ 00:05:53.199 20:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:53.199 20:48:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:53.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.199 20:48:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69389 00:05:53.199 20:48:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.199 20:48:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69389 00:05:53.199 20:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69389 ']' 00:05:53.199 20:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.199 20:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.199 20:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.199 20:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.199 20:48:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:53.199 20:48:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.199 [2024-11-20 20:48:11.168045] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
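waitforlisten, used above to gate the test on target readiness, behaves roughly like the poll loop below. This is a simplified sketch: the retry budget and socket path match the defaults visible in the trace, but the readiness probe (rpc_get_methods) is an assumption.

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < 100; i++)); do             # max_retries=100 in the trace
        kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
      done
      return 1
    }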
00:05:53.199 [2024-11-20 20:48:11.168156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69389 ] 00:05:53.199 [2024-11-20 20:48:11.307071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.457 [2024-11-20 20:48:11.330002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.024 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.024 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:54.024 20:48:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:54.024 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.024 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.024 [2024-11-20 20:48:12.011862] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:54.024 request: 00:05:54.024 { 00:05:54.024 "trtype": "tcp", 00:05:54.024 "method": "nvmf_get_transports", 00:05:54.024 "req_id": 1 00:05:54.024 } 00:05:54.024 Got JSON-RPC error response 00:05:54.024 response: 00:05:54.024 { 00:05:54.024 "code": -19, 00:05:54.024 "message": "No such device" 00:05:54.024 } 00:05:54.024 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:54.024 20:48:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:54.024 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.024 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.024 [2024-11-20 20:48:12.019968] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:54.024 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.024 20:48:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:54.024 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.024 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:54.283 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:54.283 20:48:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:54.283 { 00:05:54.283 "subsystems": [ 00:05:54.283 { 00:05:54.283 "subsystem": "fsdev", 00:05:54.283 "config": [ 00:05:54.283 { 00:05:54.283 "method": "fsdev_set_opts", 00:05:54.283 "params": { 00:05:54.283 "fsdev_io_pool_size": 65535, 00:05:54.283 "fsdev_io_cache_size": 256 00:05:54.283 } 00:05:54.283 } 00:05:54.283 ] 00:05:54.283 }, 00:05:54.283 { 00:05:54.283 "subsystem": "keyring", 00:05:54.283 "config": [] 00:05:54.283 }, 00:05:54.283 { 00:05:54.283 "subsystem": "iobuf", 00:05:54.283 "config": [ 00:05:54.283 { 00:05:54.283 "method": "iobuf_set_options", 00:05:54.283 "params": { 00:05:54.283 "small_pool_count": 8192, 00:05:54.283 "large_pool_count": 1024, 00:05:54.283 "small_bufsize": 8192, 00:05:54.283 "large_bufsize": 135168, 00:05:54.283 "enable_numa": false 00:05:54.283 } 00:05:54.283 } 00:05:54.283 ] 00:05:54.283 }, 00:05:54.283 { 00:05:54.283 "subsystem": "sock", 00:05:54.283 "config": [ 00:05:54.283 { 
00:05:54.283 "method": "sock_set_default_impl", 00:05:54.283 "params": { 00:05:54.283 "impl_name": "posix" 00:05:54.283 } 00:05:54.283 }, 00:05:54.283 { 00:05:54.283 "method": "sock_impl_set_options", 00:05:54.283 "params": { 00:05:54.283 "impl_name": "ssl", 00:05:54.283 "recv_buf_size": 4096, 00:05:54.283 "send_buf_size": 4096, 00:05:54.283 "enable_recv_pipe": true, 00:05:54.283 "enable_quickack": false, 00:05:54.283 "enable_placement_id": 0, 00:05:54.283 "enable_zerocopy_send_server": true, 00:05:54.283 "enable_zerocopy_send_client": false, 00:05:54.283 "zerocopy_threshold": 0, 00:05:54.283 "tls_version": 0, 00:05:54.283 "enable_ktls": false 00:05:54.283 } 00:05:54.283 }, 00:05:54.283 { 00:05:54.283 "method": "sock_impl_set_options", 00:05:54.283 "params": { 00:05:54.283 "impl_name": "posix", 00:05:54.283 "recv_buf_size": 2097152, 00:05:54.283 "send_buf_size": 2097152, 00:05:54.283 "enable_recv_pipe": true, 00:05:54.283 "enable_quickack": false, 00:05:54.283 "enable_placement_id": 0, 00:05:54.283 "enable_zerocopy_send_server": true, 00:05:54.284 "enable_zerocopy_send_client": false, 00:05:54.284 "zerocopy_threshold": 0, 00:05:54.284 "tls_version": 0, 00:05:54.284 "enable_ktls": false 00:05:54.284 } 00:05:54.284 } 00:05:54.284 ] 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "subsystem": "vmd", 00:05:54.284 "config": [] 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "subsystem": "accel", 00:05:54.284 "config": [ 00:05:54.284 { 00:05:54.284 "method": "accel_set_options", 00:05:54.284 "params": { 00:05:54.284 "small_cache_size": 128, 00:05:54.284 "large_cache_size": 16, 00:05:54.284 "task_count": 2048, 00:05:54.284 "sequence_count": 2048, 00:05:54.284 "buf_count": 2048 00:05:54.284 } 00:05:54.284 } 00:05:54.284 ] 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "subsystem": "bdev", 00:05:54.284 "config": [ 00:05:54.284 { 00:05:54.284 "method": "bdev_set_options", 00:05:54.284 "params": { 00:05:54.284 "bdev_io_pool_size": 65535, 00:05:54.284 "bdev_io_cache_size": 256, 00:05:54.284 "bdev_auto_examine": true, 00:05:54.284 "iobuf_small_cache_size": 128, 00:05:54.284 "iobuf_large_cache_size": 16 00:05:54.284 } 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "method": "bdev_raid_set_options", 00:05:54.284 "params": { 00:05:54.284 "process_window_size_kb": 1024, 00:05:54.284 "process_max_bandwidth_mb_sec": 0 00:05:54.284 } 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "method": "bdev_iscsi_set_options", 00:05:54.284 "params": { 00:05:54.284 "timeout_sec": 30 00:05:54.284 } 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "method": "bdev_nvme_set_options", 00:05:54.284 "params": { 00:05:54.284 "action_on_timeout": "none", 00:05:54.284 "timeout_us": 0, 00:05:54.284 "timeout_admin_us": 0, 00:05:54.284 "keep_alive_timeout_ms": 10000, 00:05:54.284 "arbitration_burst": 0, 00:05:54.284 "low_priority_weight": 0, 00:05:54.284 "medium_priority_weight": 0, 00:05:54.284 "high_priority_weight": 0, 00:05:54.284 "nvme_adminq_poll_period_us": 10000, 00:05:54.284 "nvme_ioq_poll_period_us": 0, 00:05:54.284 "io_queue_requests": 0, 00:05:54.284 "delay_cmd_submit": true, 00:05:54.284 "transport_retry_count": 4, 00:05:54.284 "bdev_retry_count": 3, 00:05:54.284 "transport_ack_timeout": 0, 00:05:54.284 "ctrlr_loss_timeout_sec": 0, 00:05:54.284 "reconnect_delay_sec": 0, 00:05:54.284 "fast_io_fail_timeout_sec": 0, 00:05:54.284 "disable_auto_failback": false, 00:05:54.284 "generate_uuids": false, 00:05:54.284 "transport_tos": 0, 00:05:54.284 "nvme_error_stat": false, 00:05:54.284 "rdma_srq_size": 0, 00:05:54.284 "io_path_stat": false, 
00:05:54.284 "allow_accel_sequence": false, 00:05:54.284 "rdma_max_cq_size": 0, 00:05:54.284 "rdma_cm_event_timeout_ms": 0, 00:05:54.284 "dhchap_digests": [ 00:05:54.284 "sha256", 00:05:54.284 "sha384", 00:05:54.284 "sha512" 00:05:54.284 ], 00:05:54.284 "dhchap_dhgroups": [ 00:05:54.284 "null", 00:05:54.284 "ffdhe2048", 00:05:54.284 "ffdhe3072", 00:05:54.284 "ffdhe4096", 00:05:54.284 "ffdhe6144", 00:05:54.284 "ffdhe8192" 00:05:54.284 ] 00:05:54.284 } 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "method": "bdev_nvme_set_hotplug", 00:05:54.284 "params": { 00:05:54.284 "period_us": 100000, 00:05:54.284 "enable": false 00:05:54.284 } 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "method": "bdev_wait_for_examine" 00:05:54.284 } 00:05:54.284 ] 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "subsystem": "scsi", 00:05:54.284 "config": null 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "subsystem": "scheduler", 00:05:54.284 "config": [ 00:05:54.284 { 00:05:54.284 "method": "framework_set_scheduler", 00:05:54.284 "params": { 00:05:54.284 "name": "static" 00:05:54.284 } 00:05:54.284 } 00:05:54.284 ] 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "subsystem": "vhost_scsi", 00:05:54.284 "config": [] 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "subsystem": "vhost_blk", 00:05:54.284 "config": [] 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "subsystem": "ublk", 00:05:54.284 "config": [] 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "subsystem": "nbd", 00:05:54.284 "config": [] 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "subsystem": "nvmf", 00:05:54.284 "config": [ 00:05:54.284 { 00:05:54.284 "method": "nvmf_set_config", 00:05:54.284 "params": { 00:05:54.284 "discovery_filter": "match_any", 00:05:54.284 "admin_cmd_passthru": { 00:05:54.284 "identify_ctrlr": false 00:05:54.284 }, 00:05:54.284 "dhchap_digests": [ 00:05:54.284 "sha256", 00:05:54.284 "sha384", 00:05:54.284 "sha512" 00:05:54.284 ], 00:05:54.284 "dhchap_dhgroups": [ 00:05:54.284 "null", 00:05:54.284 "ffdhe2048", 00:05:54.284 "ffdhe3072", 00:05:54.284 "ffdhe4096", 00:05:54.284 "ffdhe6144", 00:05:54.284 "ffdhe8192" 00:05:54.284 ] 00:05:54.284 } 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "method": "nvmf_set_max_subsystems", 00:05:54.284 "params": { 00:05:54.284 "max_subsystems": 1024 00:05:54.284 } 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "method": "nvmf_set_crdt", 00:05:54.284 "params": { 00:05:54.284 "crdt1": 0, 00:05:54.284 "crdt2": 0, 00:05:54.284 "crdt3": 0 00:05:54.284 } 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "method": "nvmf_create_transport", 00:05:54.284 "params": { 00:05:54.284 "trtype": "TCP", 00:05:54.284 "max_queue_depth": 128, 00:05:54.284 "max_io_qpairs_per_ctrlr": 127, 00:05:54.284 "in_capsule_data_size": 4096, 00:05:54.284 "max_io_size": 131072, 00:05:54.284 "io_unit_size": 131072, 00:05:54.284 "max_aq_depth": 128, 00:05:54.284 "num_shared_buffers": 511, 00:05:54.284 "buf_cache_size": 4294967295, 00:05:54.284 "dif_insert_or_strip": false, 00:05:54.284 "zcopy": false, 00:05:54.284 "c2h_success": true, 00:05:54.284 "sock_priority": 0, 00:05:54.284 "abort_timeout_sec": 1, 00:05:54.284 "ack_timeout": 0, 00:05:54.284 "data_wr_pool_size": 0 00:05:54.284 } 00:05:54.284 } 00:05:54.284 ] 00:05:54.284 }, 00:05:54.284 { 00:05:54.284 "subsystem": "iscsi", 00:05:54.284 "config": [ 00:05:54.284 { 00:05:54.284 "method": "iscsi_set_options", 00:05:54.284 "params": { 00:05:54.284 "node_base": "iqn.2016-06.io.spdk", 00:05:54.284 "max_sessions": 128, 00:05:54.284 "max_connections_per_session": 2, 00:05:54.284 "max_queue_depth": 64, 00:05:54.284 
"default_time2wait": 2, 00:05:54.284 "default_time2retain": 20, 00:05:54.284 "first_burst_length": 8192, 00:05:54.284 "immediate_data": true, 00:05:54.284 "allow_duplicated_isid": false, 00:05:54.284 "error_recovery_level": 0, 00:05:54.284 "nop_timeout": 60, 00:05:54.284 "nop_in_interval": 30, 00:05:54.284 "disable_chap": false, 00:05:54.284 "require_chap": false, 00:05:54.284 "mutual_chap": false, 00:05:54.284 "chap_group": 0, 00:05:54.284 "max_large_datain_per_connection": 64, 00:05:54.284 "max_r2t_per_connection": 4, 00:05:54.284 "pdu_pool_size": 36864, 00:05:54.284 "immediate_data_pool_size": 16384, 00:05:54.284 "data_out_pool_size": 2048 00:05:54.284 } 00:05:54.284 } 00:05:54.284 ] 00:05:54.284 } 00:05:54.284 ] 00:05:54.284 } 00:05:54.284 20:48:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:54.284 20:48:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69389 00:05:54.284 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69389 ']' 00:05:54.284 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69389 00:05:54.284 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:54.284 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.284 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69389 00:05:54.284 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.284 killing process with pid 69389 00:05:54.284 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.284 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69389' 00:05:54.284 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69389 00:05:54.284 20:48:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69389 00:05:54.543 20:48:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69418 00:05:54.543 20:48:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:54.543 20:48:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:59.810 20:48:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69418 00:05:59.810 20:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69418 ']' 00:05:59.810 20:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69418 00:05:59.810 20:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:59.810 20:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.810 20:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69418 00:05:59.810 20:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.810 killing process with pid 69418 00:05:59.810 20:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.810 20:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69418' 00:05:59.810 20:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69418 00:05:59.810 20:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69418 00:05:59.810 20:48:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.810 20:48:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:59.810 00:05:59.810 real 0m6.723s 00:05:59.810 user 0m6.344s 00:05:59.810 sys 0m0.604s 00:05:59.810 20:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.811 ************************************ 00:05:59.811 END TEST skip_rpc_with_json 00:05:59.811 ************************************ 00:05:59.811 20:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:59.811 20:48:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:59.811 20:48:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.811 20:48:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.811 20:48:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.811 ************************************ 00:05:59.811 START TEST skip_rpc_with_delay 00:05:59.811 ************************************ 00:05:59.811 20:48:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:59.811 20:48:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.811 20:48:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:59.811 20:48:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:59.811 20:48:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.811 20:48:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.811 20:48:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.811 20:48:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.811 20:48:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.811 20:48:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.811 20:48:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:59.811 20:48:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:59.811 20:48:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:00.072 [2024-11-20 20:48:17.965617] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
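The NOT wrapper doing the work in skip_rpc_with_delay runs a command that is expected to fail and inverts the result. A simplified outline of the es bookkeeping visible in the trace (the signal-code unwrapping and case mapping are condensed):

    NOT() {
      local es=0
      "$@" || es=$?                          # capture the failing status
      (( es > 128 )) && es=$(( es - 128 ))   # unwrap 128+N signal-style codes
      (( es != 0 ))                          # succeed only if the command failed
    }

    NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # refused, as logged above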
00:06:00.072 20:48:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:00.072 20:48:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.072 20:48:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.072 20:48:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.072 00:06:00.072 real 0m0.122s 00:06:00.072 user 0m0.068s 00:06:00.072 sys 0m0.052s 00:06:00.072 20:48:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.072 ************************************ 00:06:00.072 END TEST skip_rpc_with_delay 00:06:00.072 ************************************ 00:06:00.072 20:48:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:00.072 20:48:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:00.072 20:48:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:00.072 20:48:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:00.072 20:48:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.072 20:48:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.072 20:48:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.072 ************************************ 00:06:00.072 START TEST exit_on_failed_rpc_init 00:06:00.072 ************************************ 00:06:00.072 20:48:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:00.072 20:48:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69529 00:06:00.072 20:48:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69529 00:06:00.072 20:48:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69529 ']' 00:06:00.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.072 20:48:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.072 20:48:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.072 20:48:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.072 20:48:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.072 20:48:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.072 20:48:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:00.072 [2024-11-20 20:48:18.185870] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:00.072 [2024-11-20 20:48:18.186046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69529 ] 00:06:00.334 [2024-11-20 20:48:18.332005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.334 [2024-11-20 20:48:18.375046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:01.272 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:01.272 [2024-11-20 20:48:19.121787] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:01.272 [2024-11-20 20:48:19.121942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69547 ] 00:06:01.272 [2024-11-20 20:48:19.268812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.272 [2024-11-20 20:48:19.302194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.272 [2024-11-20 20:48:19.302330] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
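Schematically, the failure being provoked here is a second target reusing the default RPC socket path while the first still owns it (a sketch; the error text matches the rpc.c notices in the log):

    spdk_tgt -m 0x1 &            # first instance binds /var/tmp/spdk.sock
    waitforlisten $!
    NOT spdk_tgt -m 0x2          # second instance fails rpc_initialize:
                                 # "RPC Unix domain socket path /var/tmp/spdk.sock
                                 #  in use. Specify another." -> app stops non-zero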
00:06:01.272 [2024-11-20 20:48:19.302347] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:01.272 [2024-11-20 20:48:19.302366] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69529 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69529 ']' 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69529 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69529 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.532 killing process with pid 69529 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69529' 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69529 00:06:01.532 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69529 00:06:01.790 00:06:01.790 real 0m1.792s 00:06:01.790 user 0m1.786s 00:06:01.790 sys 0m0.592s 00:06:01.790 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.790 20:48:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:01.790 ************************************ 00:06:01.790 END TEST exit_on_failed_rpc_init 00:06:01.790 ************************************ 00:06:02.049 20:48:19 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.049 00:06:02.049 real 0m14.398s 00:06:02.049 user 0m13.250s 00:06:02.049 sys 0m1.769s 00:06:02.049 20:48:19 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.049 ************************************ 00:06:02.049 END TEST skip_rpc 00:06:02.049 ************************************ 00:06:02.049 20:48:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.049 20:48:19 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:02.049 20:48:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.049 20:48:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.049 20:48:19 -- common/autotest_common.sh@10 -- # set +x 00:06:02.049 
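killprocess, called at every teardown in this log, follows the pattern below, reconstructed as a sketch from the repeated autotest_common.sh trace (the reactor_0/sudo special-casing seen there is omitted):

    killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1
      kill -0 "$pid"                                   # must still be running
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                      # reap it, propagate exit status
    }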
************************************ 00:06:02.049 START TEST rpc_client 00:06:02.049 ************************************ 00:06:02.049 20:48:19 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:02.049 * Looking for test storage... 00:06:02.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:02.049 20:48:20 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.049 20:48:20 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.049 20:48:20 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.049 20:48:20 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.049 20:48:20 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:02.049 20:48:20 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.049 20:48:20 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.049 --rc genhtml_branch_coverage=1 00:06:02.049 --rc genhtml_function_coverage=1 00:06:02.049 --rc genhtml_legend=1 00:06:02.049 --rc geninfo_all_blocks=1 00:06:02.049 --rc geninfo_unexecuted_blocks=1 00:06:02.049 00:06:02.049 ' 00:06:02.049 20:48:20 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.049 --rc genhtml_branch_coverage=1 00:06:02.049 --rc genhtml_function_coverage=1 00:06:02.049 --rc genhtml_legend=1 00:06:02.049 --rc geninfo_all_blocks=1 00:06:02.049 --rc geninfo_unexecuted_blocks=1 00:06:02.049 00:06:02.049 ' 00:06:02.049 20:48:20 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.049 --rc genhtml_branch_coverage=1 00:06:02.049 --rc genhtml_function_coverage=1 00:06:02.049 --rc genhtml_legend=1 00:06:02.049 --rc geninfo_all_blocks=1 00:06:02.049 --rc geninfo_unexecuted_blocks=1 00:06:02.049 00:06:02.049 ' 00:06:02.049 20:48:20 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.049 --rc genhtml_branch_coverage=1 00:06:02.049 --rc genhtml_function_coverage=1 00:06:02.049 --rc genhtml_legend=1 00:06:02.049 --rc geninfo_all_blocks=1 00:06:02.049 --rc geninfo_unexecuted_blocks=1 00:06:02.049 00:06:02.049 ' 00:06:02.049 20:48:20 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:02.049 OK 00:06:02.307 20:48:20 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:02.307 00:06:02.307 real 0m0.180s 00:06:02.307 user 0m0.099s 00:06:02.307 sys 0m0.087s 00:06:02.307 20:48:20 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.307 ************************************ 00:06:02.307 END TEST rpc_client 00:06:02.307 ************************************ 00:06:02.307 20:48:20 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:02.307 20:48:20 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:02.307 20:48:20 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.307 20:48:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.307 20:48:20 -- common/autotest_common.sh@10 -- # set +x 00:06:02.307 ************************************ 00:06:02.307 START TEST json_config 00:06:02.307 ************************************ 00:06:02.307 20:48:20 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:02.307 20:48:20 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.307 20:48:20 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.307 20:48:20 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.307 20:48:20 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.307 20:48:20 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.307 20:48:20 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.307 20:48:20 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.307 20:48:20 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.307 20:48:20 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.307 20:48:20 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.307 20:48:20 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.307 20:48:20 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.307 20:48:20 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.307 20:48:20 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.307 20:48:20 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.307 20:48:20 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:02.307 20:48:20 json_config -- scripts/common.sh@345 -- # : 1 00:06:02.307 20:48:20 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.307 20:48:20 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:02.307 20:48:20 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:02.307 20:48:20 json_config -- scripts/common.sh@353 -- # local d=1 00:06:02.307 20:48:20 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.307 20:48:20 json_config -- scripts/common.sh@355 -- # echo 1 00:06:02.307 20:48:20 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.307 20:48:20 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:02.307 20:48:20 json_config -- scripts/common.sh@353 -- # local d=2 00:06:02.307 20:48:20 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.307 20:48:20 json_config -- scripts/common.sh@355 -- # echo 2 00:06:02.307 20:48:20 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.307 20:48:20 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.307 20:48:20 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.308 20:48:20 json_config -- scripts/common.sh@368 -- # return 0 00:06:02.308 20:48:20 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.308 20:48:20 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.308 --rc genhtml_branch_coverage=1 00:06:02.308 --rc genhtml_function_coverage=1 00:06:02.308 --rc genhtml_legend=1 00:06:02.308 --rc geninfo_all_blocks=1 00:06:02.308 --rc geninfo_unexecuted_blocks=1 00:06:02.308 00:06:02.308 ' 00:06:02.308 20:48:20 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.308 --rc genhtml_branch_coverage=1 00:06:02.308 --rc genhtml_function_coverage=1 00:06:02.308 --rc genhtml_legend=1 00:06:02.308 --rc geninfo_all_blocks=1 00:06:02.308 --rc geninfo_unexecuted_blocks=1 00:06:02.308 00:06:02.308 ' 00:06:02.308 20:48:20 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.308 --rc genhtml_branch_coverage=1 00:06:02.308 --rc genhtml_function_coverage=1 00:06:02.308 --rc genhtml_legend=1 00:06:02.308 --rc geninfo_all_blocks=1 00:06:02.308 --rc geninfo_unexecuted_blocks=1 00:06:02.308 00:06:02.308 ' 00:06:02.308 20:48:20 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.308 --rc genhtml_branch_coverage=1 00:06:02.308 --rc genhtml_function_coverage=1 00:06:02.308 --rc genhtml_legend=1 00:06:02.308 --rc geninfo_all_blocks=1 00:06:02.308 --rc geninfo_unexecuted_blocks=1 00:06:02.308 00:06:02.308 ' 00:06:02.308 20:48:20 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.308 20:48:20 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:873dcbcf-2835-4028-b4fb-8081b83de7a7 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=873dcbcf-2835-4028-b4fb-8081b83de7a7 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:02.308 20:48:20 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:02.308 20:48:20 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.308 20:48:20 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.308 20:48:20 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.308 20:48:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.308 20:48:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.308 20:48:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.308 20:48:20 json_config -- paths/export.sh@5 -- # export PATH 00:06:02.308 20:48:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@51 -- # : 0 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:02.308 20:48:20 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:02.308 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:02.308 20:48:20 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:02.308 20:48:20 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:02.308 20:48:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:02.308 20:48:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:02.308 20:48:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:02.308 WARNING: No tests are enabled so not running JSON configuration tests 00:06:02.308 20:48:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:02.308 20:48:20 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:02.308 20:48:20 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:02.308 00:06:02.308 real 0m0.142s 00:06:02.308 user 0m0.095s 00:06:02.308 sys 0m0.050s 00:06:02.308 20:48:20 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.308 20:48:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.308 ************************************ 00:06:02.308 END TEST json_config 00:06:02.308 ************************************ 00:06:02.308 20:48:20 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:02.308 20:48:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.308 20:48:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.308 20:48:20 -- common/autotest_common.sh@10 -- # set +x 00:06:02.567 ************************************ 00:06:02.567 START TEST json_config_extra_key 00:06:02.567 ************************************ 00:06:02.567 20:48:20 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:02.567 20:48:20 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:02.567 20:48:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:02.567 20:48:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:02.567 20:48:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:02.567 20:48:20 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:02.567 20:48:20 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:02.567 20:48:20 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:02.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.567 --rc genhtml_branch_coverage=1 00:06:02.567 --rc genhtml_function_coverage=1 00:06:02.567 --rc genhtml_legend=1 00:06:02.567 --rc geninfo_all_blocks=1 00:06:02.567 --rc geninfo_unexecuted_blocks=1 00:06:02.567 00:06:02.567 ' 00:06:02.567 20:48:20 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:02.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.567 --rc genhtml_branch_coverage=1 00:06:02.567 --rc genhtml_function_coverage=1 00:06:02.567 --rc genhtml_legend=1 00:06:02.567 --rc geninfo_all_blocks=1 00:06:02.567 --rc geninfo_unexecuted_blocks=1 00:06:02.567 00:06:02.567 ' 00:06:02.567 20:48:20 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:02.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.567 --rc genhtml_branch_coverage=1 00:06:02.567 --rc genhtml_function_coverage=1 00:06:02.567 --rc genhtml_legend=1 00:06:02.567 --rc geninfo_all_blocks=1 00:06:02.567 --rc geninfo_unexecuted_blocks=1 00:06:02.567 00:06:02.567 ' 00:06:02.567 20:48:20 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:02.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:02.567 --rc genhtml_branch_coverage=1 00:06:02.567 --rc 
genhtml_function_coverage=1 00:06:02.567 --rc genhtml_legend=1 00:06:02.567 --rc geninfo_all_blocks=1 00:06:02.567 --rc geninfo_unexecuted_blocks=1 00:06:02.567 00:06:02.567 ' 00:06:02.567 20:48:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:873dcbcf-2835-4028-b4fb-8081b83de7a7 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=873dcbcf-2835-4028-b4fb-8081b83de7a7 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:02.567 20:48:20 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:02.567 20:48:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.567 20:48:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.567 20:48:20 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.567 20:48:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:02.567 20:48:20 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:02.567 20:48:20 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:02.568 20:48:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:02.568 20:48:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:02.568 20:48:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:02.568 20:48:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:02.568 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:02.568 20:48:20 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:02.568 20:48:20 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:02.568 20:48:20 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:02.568 20:48:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:02.568 20:48:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:02.568 20:48:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:02.568 20:48:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:02.568 20:48:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:02.568 20:48:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:02.568 20:48:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:02.568 20:48:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:02.568 INFO: launching applications... 00:06:02.568 20:48:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:02.568 20:48:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:02.568 20:48:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
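The "[: : integer expression expected" messages logged above (nvmf/common.sh line 33) are bash's complaint when '[' is asked to compare an empty string numerically: the traced test is '[' '' -eq 1 ']' because the guarded variable is unset, so the test fails and the script simply carries on. A minimal sketch of the pitfall and the usual guard, with an illustrative variable name:

    # Emits "[: : integer expression expected" when FLAG is empty or unset:
    FLAG=""
    [ "$FLAG" -eq 1 ] && echo enabled

    # Guarded form: default the empty value to 0 before the numeric test.
    [ "${FLAG:-0}" -eq 1 ] && echo enabled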
00:06:02.568 20:48:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:02.568 20:48:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:02.568 20:48:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:02.568 20:48:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:02.568 20:48:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:02.568 20:48:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:02.568 20:48:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.568 20:48:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:02.568 Waiting for target to run... 00:06:02.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:02.568 20:48:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69730 00:06:02.568 20:48:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:02.568 20:48:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69730 /var/tmp/spdk_tgt.sock 00:06:02.568 20:48:20 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69730 ']' 00:06:02.568 20:48:20 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:02.568 20:48:20 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.568 20:48:20 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:02.568 20:48:20 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.568 20:48:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:02.568 20:48:20 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:02.568 [2024-11-20 20:48:20.647208] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:02.568 [2024-11-20 20:48:20.647498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69730 ] 00:06:03.134 [2024-11-20 20:48:20.950131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.134 [2024-11-20 20:48:20.965508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.392 20:48:21 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.392 20:48:21 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:03.392 20:48:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:03.392 00:06:03.392 20:48:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:03.392 INFO: shutting down applications... 
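The waitforlisten step traced above blocks until the freshly launched spdk_tgt is accepting RPC connections on /var/tmp/spdk_tgt.sock, with max_retries=100 attempts. A simplified sketch of that polling pattern, not the exact SPDK helper (SPDK_BIN and CONFIG are illustrative names; the flags match the traced command line):

    # Launch the target, then poll for its UNIX-domain RPC socket.
    "$SPDK_BIN" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$CONFIG" &
    pid=$!
    for (( i = 0; i < 100; i++ )); do
        kill -0 "$pid" 2>/dev/null || { echo 'target exited early'; exit 1; }
        [ -S /var/tmp/spdk_tgt.sock ] && break    # RPC socket is up
        sleep 0.1
    done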
00:06:03.392 20:48:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:03.392 20:48:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:03.392 20:48:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:03.392 20:48:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69730 ]] 00:06:03.393 20:48:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69730 00:06:03.393 20:48:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:03.393 20:48:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.393 20:48:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69730 00:06:03.393 20:48:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:03.959 20:48:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:03.959 20:48:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.959 20:48:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69730 00:06:03.959 20:48:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:03.959 20:48:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:03.959 20:48:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:03.959 20:48:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:03.959 SPDK target shutdown done 00:06:03.959 20:48:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:03.959 Success 00:06:03.959 ************************************ 00:06:03.959 END TEST json_config_extra_key 00:06:03.959 ************************************ 00:06:03.959 00:06:03.959 real 0m1.560s 00:06:03.959 user 0m1.258s 00:06:03.959 sys 0m0.352s 00:06:03.959 20:48:21 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.959 20:48:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:03.959 20:48:22 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:03.959 20:48:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.959 20:48:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.959 20:48:22 -- common/autotest_common.sh@10 -- # set +x 00:06:03.959 ************************************ 00:06:03.959 START TEST alias_rpc 00:06:03.959 ************************************ 00:06:03.959 20:48:22 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:04.218 * Looking for test storage... 
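The shutdown sequence traced above sends SIGINT to the target and then polls kill -0 at 0.5s intervals, giving up after 30 iterations, before declaring 'SPDK target shutdown done'. The same graceful-shutdown-with-timeout idiom, condensed:

    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break    # process has exited
        sleep 0.5
    done
    echo 'SPDK target shutdown done'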
00:06:04.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:04.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
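The lt 1.15 2 check traced around this point (cmp_versions in scripts/common.sh) implements a dotted-version comparison: both strings are split on IFS=.-: into arrays and compared component by component until one side wins. A standalone sketch of the same algorithm, treating missing components as 0:

    # Return 0 (true) if version $1 sorts before version $2.
    version_lt() {
        local -a v1 v2; local i max
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        done
        return 1    # versions are equal
    }
    version_lt 1.15 2 && echo 'lcov is older than 2.x'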
00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.218 20:48:22 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.218 --rc genhtml_branch_coverage=1 00:06:04.218 --rc genhtml_function_coverage=1 00:06:04.218 --rc genhtml_legend=1 00:06:04.218 --rc geninfo_all_blocks=1 00:06:04.218 --rc geninfo_unexecuted_blocks=1 00:06:04.218 00:06:04.218 ' 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.218 --rc genhtml_branch_coverage=1 00:06:04.218 --rc genhtml_function_coverage=1 00:06:04.218 --rc genhtml_legend=1 00:06:04.218 --rc geninfo_all_blocks=1 00:06:04.218 --rc geninfo_unexecuted_blocks=1 00:06:04.218 00:06:04.218 ' 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.218 --rc genhtml_branch_coverage=1 00:06:04.218 --rc genhtml_function_coverage=1 00:06:04.218 --rc genhtml_legend=1 00:06:04.218 --rc geninfo_all_blocks=1 00:06:04.218 --rc geninfo_unexecuted_blocks=1 00:06:04.218 00:06:04.218 ' 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.218 --rc genhtml_branch_coverage=1 00:06:04.218 --rc genhtml_function_coverage=1 00:06:04.218 --rc genhtml_legend=1 00:06:04.218 --rc geninfo_all_blocks=1 00:06:04.218 --rc geninfo_unexecuted_blocks=1 00:06:04.218 00:06:04.218 ' 00:06:04.218 20:48:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:04.218 20:48:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69803 00:06:04.218 20:48:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69803 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69803 ']' 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.218 20:48:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.218 20:48:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.218 [2024-11-20 20:48:22.271249] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
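Once this target is up, the alias_rpc test drives rpc.py load_config -i against it (traced below); per rpc.py's load_config options, -i corresponds to --include-aliases, i.e. accept deprecated RPC method aliases, which is the behavior this test exercises. A minimal usage sketch, assuming a target already listening on the default socket:

    # Replay a saved JSON configuration, allowing deprecated method aliases.
    ./scripts/rpc.py save_config > /tmp/config.json
    ./scripts/rpc.py load_config -i < /tmp/config.json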
00:06:04.218 [2024-11-20 20:48:22.271867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69803 ] 00:06:04.476 [2024-11-20 20:48:22.418706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.476 [2024-11-20 20:48:22.439524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.041 20:48:23 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.041 20:48:23 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:05.041 20:48:23 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:05.299 20:48:23 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69803 00:06:05.299 20:48:23 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69803 ']' 00:06:05.299 20:48:23 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69803 00:06:05.299 20:48:23 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:05.299 20:48:23 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.299 20:48:23 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69803 00:06:05.299 20:48:23 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.299 20:48:23 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.299 20:48:23 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69803' 00:06:05.299 killing process with pid 69803 00:06:05.299 20:48:23 alias_rpc -- common/autotest_common.sh@973 -- # kill 69803 00:06:05.299 20:48:23 alias_rpc -- common/autotest_common.sh@978 -- # wait 69803 00:06:05.557 00:06:05.557 real 0m1.563s 00:06:05.557 user 0m1.691s 00:06:05.557 sys 0m0.372s 00:06:05.557 20:48:23 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.557 ************************************ 00:06:05.557 END TEST alias_rpc 00:06:05.557 ************************************ 00:06:05.557 20:48:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.557 20:48:23 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:05.557 20:48:23 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:05.557 20:48:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.557 20:48:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.557 20:48:23 -- common/autotest_common.sh@10 -- # set +x 00:06:05.815 ************************************ 00:06:05.815 START TEST spdkcli_tcp 00:06:05.815 ************************************ 00:06:05.815 20:48:23 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:05.815 * Looking for test storage... 
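The killprocess helper traced above (for pid 69803) checks that the pid is still alive with kill -0, confirms via ps --no-headers -o comm= that it is still the expected reactor process rather than a recycled pid now owned by sudo, then kills it and waits. A condensed sketch of that defensive kill (the real helper handles the sudo-wrapper case more elaborately):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                       # not running
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                   # pid was recycled
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }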
00:06:05.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:05.815 20:48:23 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:05.815 20:48:23 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:05.815 20:48:23 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:05.815 20:48:23 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:05.815 20:48:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:05.816 20:48:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.816 20:48:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:05.816 20:48:23 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.816 20:48:23 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.816 20:48:23 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.816 20:48:23 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:05.816 20:48:23 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.816 20:48:23 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:05.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.816 --rc genhtml_branch_coverage=1 00:06:05.816 --rc genhtml_function_coverage=1 00:06:05.816 --rc genhtml_legend=1 00:06:05.816 --rc geninfo_all_blocks=1 00:06:05.816 --rc geninfo_unexecuted_blocks=1 00:06:05.816 00:06:05.816 ' 00:06:05.816 20:48:23 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:05.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.816 --rc genhtml_branch_coverage=1 00:06:05.816 --rc genhtml_function_coverage=1 00:06:05.816 --rc genhtml_legend=1 00:06:05.816 --rc geninfo_all_blocks=1 00:06:05.816 --rc geninfo_unexecuted_blocks=1 00:06:05.816 
00:06:05.816 ' 00:06:05.816 20:48:23 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:05.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.816 --rc genhtml_branch_coverage=1 00:06:05.816 --rc genhtml_function_coverage=1 00:06:05.816 --rc genhtml_legend=1 00:06:05.816 --rc geninfo_all_blocks=1 00:06:05.816 --rc geninfo_unexecuted_blocks=1 00:06:05.816 00:06:05.816 ' 00:06:05.816 20:48:23 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:05.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.816 --rc genhtml_branch_coverage=1 00:06:05.816 --rc genhtml_function_coverage=1 00:06:05.816 --rc genhtml_legend=1 00:06:05.816 --rc geninfo_all_blocks=1 00:06:05.816 --rc geninfo_unexecuted_blocks=1 00:06:05.816 00:06:05.816 ' 00:06:05.816 20:48:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:05.816 20:48:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:05.816 20:48:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:05.816 20:48:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:05.816 20:48:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:05.816 20:48:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:05.816 20:48:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:05.816 20:48:23 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.816 20:48:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.816 20:48:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69883 00:06:05.816 20:48:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69883 00:06:05.816 20:48:23 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 69883 ']' 00:06:05.816 20:48:23 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.816 20:48:23 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.816 20:48:23 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.816 20:48:23 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.816 20:48:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:05.816 20:48:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:05.816 [2024-11-20 20:48:23.883469] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
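The trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT line traced above is the guard that tears down the processes this test starts even if it dies mid-run; once the test succeeds, the trap is disarmed with trap - (visible further below). A bare-bones sketch of the pattern; the err_cleanup body here is illustrative, not the script's actual implementation:

    err_cleanup() {
        [ -n "$spdk_tgt_pid" ] && kill "$spdk_tgt_pid" 2>/dev/null
        [ -n "$socat_pid" ]    && kill "$socat_pid" 2>/dev/null
    }
    trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
    # ... test body ...
    trap - SIGINT SIGTERM EXIT    # disarm on the success path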
00:06:05.816 [2024-11-20 20:48:23.883587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69883 ] 00:06:06.073 [2024-11-20 20:48:24.030555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.073 [2024-11-20 20:48:24.051008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.073 [2024-11-20 20:48:24.051085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.638 20:48:24 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.639 20:48:24 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:06.639 20:48:24 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69900 00:06:06.639 20:48:24 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:06.639 20:48:24 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:06.897 [ 00:06:06.897 "bdev_malloc_delete", 00:06:06.897 "bdev_malloc_create", 00:06:06.897 "bdev_null_resize", 00:06:06.897 "bdev_null_delete", 00:06:06.897 "bdev_null_create", 00:06:06.897 "bdev_nvme_cuse_unregister", 00:06:06.897 "bdev_nvme_cuse_register", 00:06:06.897 "bdev_opal_new_user", 00:06:06.897 "bdev_opal_set_lock_state", 00:06:06.897 "bdev_opal_delete", 00:06:06.897 "bdev_opal_get_info", 00:06:06.897 "bdev_opal_create", 00:06:06.897 "bdev_nvme_opal_revert", 00:06:06.897 "bdev_nvme_opal_init", 00:06:06.897 "bdev_nvme_send_cmd", 00:06:06.897 "bdev_nvme_set_keys", 00:06:06.897 "bdev_nvme_get_path_iostat", 00:06:06.897 "bdev_nvme_get_mdns_discovery_info", 00:06:06.897 "bdev_nvme_stop_mdns_discovery", 00:06:06.897 "bdev_nvme_start_mdns_discovery", 00:06:06.897 "bdev_nvme_set_multipath_policy", 00:06:06.897 "bdev_nvme_set_preferred_path", 00:06:06.897 "bdev_nvme_get_io_paths", 00:06:06.897 "bdev_nvme_remove_error_injection", 00:06:06.897 "bdev_nvme_add_error_injection", 00:06:06.897 "bdev_nvme_get_discovery_info", 00:06:06.897 "bdev_nvme_stop_discovery", 00:06:06.897 "bdev_nvme_start_discovery", 00:06:06.897 "bdev_nvme_get_controller_health_info", 00:06:06.897 "bdev_nvme_disable_controller", 00:06:06.897 "bdev_nvme_enable_controller", 00:06:06.897 "bdev_nvme_reset_controller", 00:06:06.897 "bdev_nvme_get_transport_statistics", 00:06:06.897 "bdev_nvme_apply_firmware", 00:06:06.897 "bdev_nvme_detach_controller", 00:06:06.897 "bdev_nvme_get_controllers", 00:06:06.897 "bdev_nvme_attach_controller", 00:06:06.897 "bdev_nvme_set_hotplug", 00:06:06.897 "bdev_nvme_set_options", 00:06:06.897 "bdev_passthru_delete", 00:06:06.897 "bdev_passthru_create", 00:06:06.897 "bdev_lvol_set_parent_bdev", 00:06:06.897 "bdev_lvol_set_parent", 00:06:06.897 "bdev_lvol_check_shallow_copy", 00:06:06.897 "bdev_lvol_start_shallow_copy", 00:06:06.897 "bdev_lvol_grow_lvstore", 00:06:06.897 "bdev_lvol_get_lvols", 00:06:06.897 "bdev_lvol_get_lvstores", 00:06:06.897 "bdev_lvol_delete", 00:06:06.897 "bdev_lvol_set_read_only", 00:06:06.897 "bdev_lvol_resize", 00:06:06.897 "bdev_lvol_decouple_parent", 00:06:06.897 "bdev_lvol_inflate", 00:06:06.897 "bdev_lvol_rename", 00:06:06.897 "bdev_lvol_clone_bdev", 00:06:06.897 "bdev_lvol_clone", 00:06:06.897 "bdev_lvol_snapshot", 00:06:06.897 "bdev_lvol_create", 00:06:06.897 "bdev_lvol_delete_lvstore", 00:06:06.897 "bdev_lvol_rename_lvstore", 00:06:06.897 
"bdev_lvol_create_lvstore", 00:06:06.897 "bdev_raid_set_options", 00:06:06.897 "bdev_raid_remove_base_bdev", 00:06:06.897 "bdev_raid_add_base_bdev", 00:06:06.897 "bdev_raid_delete", 00:06:06.897 "bdev_raid_create", 00:06:06.897 "bdev_raid_get_bdevs", 00:06:06.897 "bdev_error_inject_error", 00:06:06.897 "bdev_error_delete", 00:06:06.897 "bdev_error_create", 00:06:06.897 "bdev_split_delete", 00:06:06.897 "bdev_split_create", 00:06:06.897 "bdev_delay_delete", 00:06:06.897 "bdev_delay_create", 00:06:06.897 "bdev_delay_update_latency", 00:06:06.897 "bdev_zone_block_delete", 00:06:06.897 "bdev_zone_block_create", 00:06:06.897 "blobfs_create", 00:06:06.897 "blobfs_detect", 00:06:06.897 "blobfs_set_cache_size", 00:06:06.897 "bdev_xnvme_delete", 00:06:06.897 "bdev_xnvme_create", 00:06:06.897 "bdev_aio_delete", 00:06:06.897 "bdev_aio_rescan", 00:06:06.897 "bdev_aio_create", 00:06:06.897 "bdev_ftl_set_property", 00:06:06.897 "bdev_ftl_get_properties", 00:06:06.897 "bdev_ftl_get_stats", 00:06:06.897 "bdev_ftl_unmap", 00:06:06.897 "bdev_ftl_unload", 00:06:06.897 "bdev_ftl_delete", 00:06:06.897 "bdev_ftl_load", 00:06:06.897 "bdev_ftl_create", 00:06:06.897 "bdev_virtio_attach_controller", 00:06:06.897 "bdev_virtio_scsi_get_devices", 00:06:06.897 "bdev_virtio_detach_controller", 00:06:06.897 "bdev_virtio_blk_set_hotplug", 00:06:06.897 "bdev_iscsi_delete", 00:06:06.897 "bdev_iscsi_create", 00:06:06.897 "bdev_iscsi_set_options", 00:06:06.897 "accel_error_inject_error", 00:06:06.897 "ioat_scan_accel_module", 00:06:06.897 "dsa_scan_accel_module", 00:06:06.897 "iaa_scan_accel_module", 00:06:06.897 "keyring_file_remove_key", 00:06:06.897 "keyring_file_add_key", 00:06:06.897 "keyring_linux_set_options", 00:06:06.897 "fsdev_aio_delete", 00:06:06.897 "fsdev_aio_create", 00:06:06.897 "iscsi_get_histogram", 00:06:06.897 "iscsi_enable_histogram", 00:06:06.897 "iscsi_set_options", 00:06:06.897 "iscsi_get_auth_groups", 00:06:06.897 "iscsi_auth_group_remove_secret", 00:06:06.897 "iscsi_auth_group_add_secret", 00:06:06.897 "iscsi_delete_auth_group", 00:06:06.897 "iscsi_create_auth_group", 00:06:06.897 "iscsi_set_discovery_auth", 00:06:06.897 "iscsi_get_options", 00:06:06.898 "iscsi_target_node_request_logout", 00:06:06.898 "iscsi_target_node_set_redirect", 00:06:06.898 "iscsi_target_node_set_auth", 00:06:06.898 "iscsi_target_node_add_lun", 00:06:06.898 "iscsi_get_stats", 00:06:06.898 "iscsi_get_connections", 00:06:06.898 "iscsi_portal_group_set_auth", 00:06:06.898 "iscsi_start_portal_group", 00:06:06.898 "iscsi_delete_portal_group", 00:06:06.898 "iscsi_create_portal_group", 00:06:06.898 "iscsi_get_portal_groups", 00:06:06.898 "iscsi_delete_target_node", 00:06:06.898 "iscsi_target_node_remove_pg_ig_maps", 00:06:06.898 "iscsi_target_node_add_pg_ig_maps", 00:06:06.898 "iscsi_create_target_node", 00:06:06.898 "iscsi_get_target_nodes", 00:06:06.898 "iscsi_delete_initiator_group", 00:06:06.898 "iscsi_initiator_group_remove_initiators", 00:06:06.898 "iscsi_initiator_group_add_initiators", 00:06:06.898 "iscsi_create_initiator_group", 00:06:06.898 "iscsi_get_initiator_groups", 00:06:06.898 "nvmf_set_crdt", 00:06:06.898 "nvmf_set_config", 00:06:06.898 "nvmf_set_max_subsystems", 00:06:06.898 "nvmf_stop_mdns_prr", 00:06:06.898 "nvmf_publish_mdns_prr", 00:06:06.898 "nvmf_subsystem_get_listeners", 00:06:06.898 "nvmf_subsystem_get_qpairs", 00:06:06.898 "nvmf_subsystem_get_controllers", 00:06:06.898 "nvmf_get_stats", 00:06:06.898 "nvmf_get_transports", 00:06:06.898 "nvmf_create_transport", 00:06:06.898 "nvmf_get_targets", 00:06:06.898 
"nvmf_delete_target", 00:06:06.898 "nvmf_create_target", 00:06:06.898 "nvmf_subsystem_allow_any_host", 00:06:06.898 "nvmf_subsystem_set_keys", 00:06:06.898 "nvmf_subsystem_remove_host", 00:06:06.898 "nvmf_subsystem_add_host", 00:06:06.898 "nvmf_ns_remove_host", 00:06:06.898 "nvmf_ns_add_host", 00:06:06.898 "nvmf_subsystem_remove_ns", 00:06:06.898 "nvmf_subsystem_set_ns_ana_group", 00:06:06.898 "nvmf_subsystem_add_ns", 00:06:06.898 "nvmf_subsystem_listener_set_ana_state", 00:06:06.898 "nvmf_discovery_get_referrals", 00:06:06.898 "nvmf_discovery_remove_referral", 00:06:06.898 "nvmf_discovery_add_referral", 00:06:06.898 "nvmf_subsystem_remove_listener", 00:06:06.898 "nvmf_subsystem_add_listener", 00:06:06.898 "nvmf_delete_subsystem", 00:06:06.898 "nvmf_create_subsystem", 00:06:06.898 "nvmf_get_subsystems", 00:06:06.898 "env_dpdk_get_mem_stats", 00:06:06.898 "nbd_get_disks", 00:06:06.898 "nbd_stop_disk", 00:06:06.898 "nbd_start_disk", 00:06:06.898 "ublk_recover_disk", 00:06:06.898 "ublk_get_disks", 00:06:06.898 "ublk_stop_disk", 00:06:06.898 "ublk_start_disk", 00:06:06.898 "ublk_destroy_target", 00:06:06.898 "ublk_create_target", 00:06:06.898 "virtio_blk_create_transport", 00:06:06.898 "virtio_blk_get_transports", 00:06:06.898 "vhost_controller_set_coalescing", 00:06:06.898 "vhost_get_controllers", 00:06:06.898 "vhost_delete_controller", 00:06:06.898 "vhost_create_blk_controller", 00:06:06.898 "vhost_scsi_controller_remove_target", 00:06:06.898 "vhost_scsi_controller_add_target", 00:06:06.898 "vhost_start_scsi_controller", 00:06:06.898 "vhost_create_scsi_controller", 00:06:06.898 "thread_set_cpumask", 00:06:06.898 "scheduler_set_options", 00:06:06.898 "framework_get_governor", 00:06:06.898 "framework_get_scheduler", 00:06:06.898 "framework_set_scheduler", 00:06:06.898 "framework_get_reactors", 00:06:06.898 "thread_get_io_channels", 00:06:06.898 "thread_get_pollers", 00:06:06.898 "thread_get_stats", 00:06:06.898 "framework_monitor_context_switch", 00:06:06.898 "spdk_kill_instance", 00:06:06.898 "log_enable_timestamps", 00:06:06.898 "log_get_flags", 00:06:06.898 "log_clear_flag", 00:06:06.898 "log_set_flag", 00:06:06.898 "log_get_level", 00:06:06.898 "log_set_level", 00:06:06.898 "log_get_print_level", 00:06:06.898 "log_set_print_level", 00:06:06.898 "framework_enable_cpumask_locks", 00:06:06.898 "framework_disable_cpumask_locks", 00:06:06.898 "framework_wait_init", 00:06:06.898 "framework_start_init", 00:06:06.898 "scsi_get_devices", 00:06:06.898 "bdev_get_histogram", 00:06:06.898 "bdev_enable_histogram", 00:06:06.898 "bdev_set_qos_limit", 00:06:06.898 "bdev_set_qd_sampling_period", 00:06:06.898 "bdev_get_bdevs", 00:06:06.898 "bdev_reset_iostat", 00:06:06.898 "bdev_get_iostat", 00:06:06.898 "bdev_examine", 00:06:06.898 "bdev_wait_for_examine", 00:06:06.898 "bdev_set_options", 00:06:06.898 "accel_get_stats", 00:06:06.898 "accel_set_options", 00:06:06.898 "accel_set_driver", 00:06:06.898 "accel_crypto_key_destroy", 00:06:06.898 "accel_crypto_keys_get", 00:06:06.898 "accel_crypto_key_create", 00:06:06.898 "accel_assign_opc", 00:06:06.898 "accel_get_module_info", 00:06:06.898 "accel_get_opc_assignments", 00:06:06.898 "vmd_rescan", 00:06:06.898 "vmd_remove_device", 00:06:06.898 "vmd_enable", 00:06:06.898 "sock_get_default_impl", 00:06:06.898 "sock_set_default_impl", 00:06:06.898 "sock_impl_set_options", 00:06:06.898 "sock_impl_get_options", 00:06:06.898 "iobuf_get_stats", 00:06:06.898 "iobuf_set_options", 00:06:06.898 "keyring_get_keys", 00:06:06.898 "framework_get_pci_devices", 00:06:06.898 
"framework_get_config", 00:06:06.898 "framework_get_subsystems", 00:06:06.898 "fsdev_set_opts", 00:06:06.898 "fsdev_get_opts", 00:06:06.898 "trace_get_info", 00:06:06.898 "trace_get_tpoint_group_mask", 00:06:06.898 "trace_disable_tpoint_group", 00:06:06.898 "trace_enable_tpoint_group", 00:06:06.898 "trace_clear_tpoint_mask", 00:06:06.898 "trace_set_tpoint_mask", 00:06:06.898 "notify_get_notifications", 00:06:06.898 "notify_get_types", 00:06:06.898 "spdk_get_version", 00:06:06.898 "rpc_get_methods" 00:06:06.898 ] 00:06:06.898 20:48:24 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:06.898 20:48:24 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:06.898 20:48:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:06.898 20:48:24 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:06.898 20:48:24 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69883 00:06:06.898 20:48:24 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 69883 ']' 00:06:06.898 20:48:24 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 69883 00:06:06.898 20:48:24 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:06.898 20:48:24 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.898 20:48:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69883 00:06:06.898 killing process with pid 69883 00:06:06.898 20:48:24 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.898 20:48:24 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.898 20:48:24 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69883' 00:06:06.898 20:48:24 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 69883 00:06:06.898 20:48:24 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 69883 00:06:07.157 ************************************ 00:06:07.157 END TEST spdkcli_tcp 00:06:07.157 ************************************ 00:06:07.157 00:06:07.157 real 0m1.573s 00:06:07.157 user 0m2.818s 00:06:07.157 sys 0m0.403s 00:06:07.157 20:48:25 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.157 20:48:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:07.417 20:48:25 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:07.417 20:48:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.417 20:48:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.417 20:48:25 -- common/autotest_common.sh@10 -- # set +x 00:06:07.417 ************************************ 00:06:07.417 START TEST dpdk_mem_utility 00:06:07.417 ************************************ 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:07.417 * Looking for test storage... 
00:06:07.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.417 20:48:25 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:07.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.417 --rc genhtml_branch_coverage=1 00:06:07.417 --rc genhtml_function_coverage=1 00:06:07.417 --rc genhtml_legend=1 00:06:07.417 --rc geninfo_all_blocks=1 00:06:07.417 --rc geninfo_unexecuted_blocks=1 00:06:07.417 00:06:07.417 ' 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:07.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.417 --rc 
genhtml_branch_coverage=1 00:06:07.417 --rc genhtml_function_coverage=1 00:06:07.417 --rc genhtml_legend=1 00:06:07.417 --rc geninfo_all_blocks=1 00:06:07.417 --rc geninfo_unexecuted_blocks=1 00:06:07.417 00:06:07.417 ' 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:07.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.417 --rc genhtml_branch_coverage=1 00:06:07.417 --rc genhtml_function_coverage=1 00:06:07.417 --rc genhtml_legend=1 00:06:07.417 --rc geninfo_all_blocks=1 00:06:07.417 --rc geninfo_unexecuted_blocks=1 00:06:07.417 00:06:07.417 ' 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:07.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.417 --rc genhtml_branch_coverage=1 00:06:07.417 --rc genhtml_function_coverage=1 00:06:07.417 --rc genhtml_legend=1 00:06:07.417 --rc geninfo_all_blocks=1 00:06:07.417 --rc geninfo_unexecuted_blocks=1 00:06:07.417 00:06:07.417 ' 00:06:07.417 20:48:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:07.417 20:48:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69977 00:06:07.417 20:48:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69977 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 69977 ']' 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.417 20:48:25 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:07.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.417 20:48:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:07.417 [2024-11-20 20:48:25.525437] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
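The dpdk_mem_utility test starting here is a two-step flow, visible in the dump that follows: the env_dpdk_get_mem_stats RPC asks the running target to write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then renders that file as the heap/mempool/memzone summary, with -m 0 adding the per-heap element breakdown. The same sequence, run by hand against a live target:

    # 1. Ask the running target to dump DPDK memory state.
    ./scripts/rpc.py env_dpdk_get_mem_stats        # -> /tmp/spdk_mem_dump.txt
    # 2. Summarize the dump, then drill into heap 0.
    ./scripts/dpdk_mem_info.py
    ./scripts/dpdk_mem_info.py -m 0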
00:06:07.417 [2024-11-20 20:48:25.525590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69977 ] 00:06:07.677 [2024-11-20 20:48:25.668385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.677 [2024-11-20 20:48:25.690804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.618 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.618 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:08.618 20:48:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:08.618 20:48:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:08.618 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.618 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.618 { 00:06:08.618 "filename": "/tmp/spdk_mem_dump.txt" 00:06:08.618 } 00:06:08.618 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.618 20:48:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:08.618 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:08.618 1 heaps totaling size 810.000000 MiB 00:06:08.618 size: 810.000000 MiB heap id: 0 00:06:08.618 end heaps---------- 00:06:08.618 9 mempools totaling size 595.772034 MiB 00:06:08.618 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:08.618 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:08.618 size: 92.545471 MiB name: bdev_io_69977 00:06:08.618 size: 50.003479 MiB name: msgpool_69977 00:06:08.618 size: 36.509338 MiB name: fsdev_io_69977 00:06:08.618 size: 21.763794 MiB name: PDU_Pool 00:06:08.618 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:08.618 size: 4.133484 MiB name: evtpool_69977 00:06:08.618 size: 0.026123 MiB name: Session_Pool 00:06:08.618 end mempools------- 00:06:08.618 6 memzones totaling size 4.142822 MiB 00:06:08.618 size: 1.000366 MiB name: RG_ring_0_69977 00:06:08.618 size: 1.000366 MiB name: RG_ring_1_69977 00:06:08.618 size: 1.000366 MiB name: RG_ring_4_69977 00:06:08.618 size: 1.000366 MiB name: RG_ring_5_69977 00:06:08.618 size: 0.125366 MiB name: RG_ring_2_69977 00:06:08.618 size: 0.015991 MiB name: RG_ring_3_69977 00:06:08.618 end memzones------- 00:06:08.618 20:48:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:08.618 heap id: 0 total size: 810.000000 MiB number of busy elements: 326 number of free elements: 15 00:06:08.618 list of free elements. 
size: 10.810852 MiB 00:06:08.618 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:08.618 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:08.618 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:08.618 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:08.618 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:08.618 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:08.618 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:08.618 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:08.618 element at address: 0x20001a600000 with size: 0.564575 MiB 00:06:08.618 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:08.618 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:08.618 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:08.618 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:08.618 element at address: 0x200027a00000 with size: 0.396484 MiB 00:06:08.618 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:08.618 list of standard malloc elements. size: 199.270264 MiB 00:06:08.618 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:08.618 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:08.618 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:08.618 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:08.618 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:08.618 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:08.618 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:08.618 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:08.618 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:08.618 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:08.618 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:08.618 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:08.618 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:08.618 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:08.618 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:08.618 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:08.618 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:08.618 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:08.618 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:08.618 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:08.618 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:08.618 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:08.618 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:08.619 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:08.619 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:06:08.619 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:08.619 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:08.619 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a690880 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a690940 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a690a00 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a690ac0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a690b80 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a690c40 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a690d00 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a690dc0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a690e80 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a690f40 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691000 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a6910c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691180 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691240 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691300 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691480 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691540 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691600 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a6922c0 with size: 0.000183 MiB 
00:06:08.619 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693100 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:08.619 element at 
address: 0x20001a694840 with size: 0.000183 MiB 00:06:08.619 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a695080 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a695140 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a695200 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:08.620 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a65800 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a658c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6c4c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6dc80 
with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:08.620 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:08.620 list of memzone associated elements. 
size: 599.918884 MiB 00:06:08.620 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:08.620 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:08.620 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:08.620 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:08.620 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:08.620 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_69977_0 00:06:08.620 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:08.620 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69977_0 00:06:08.620 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:08.620 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69977_0 00:06:08.620 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:08.620 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:08.620 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:08.620 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:08.620 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:08.620 associated memzone info: size: 3.000122 MiB name: MP_evtpool_69977_0 00:06:08.620 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:08.620 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69977 00:06:08.620 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:08.620 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69977 00:06:08.620 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:08.620 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:08.620 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:08.620 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:08.620 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:08.620 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:08.620 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:08.620 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:08.620 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:08.620 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69977 00:06:08.620 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:08.620 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69977 00:06:08.620 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:08.620 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69977 00:06:08.620 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:08.620 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69977 00:06:08.620 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:08.620 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69977 00:06:08.620 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:08.620 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69977 00:06:08.620 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:08.620 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:08.620 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:08.620 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:08.620 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:08.620 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:08.620 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:08.620 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_69977 00:06:08.620 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:08.620 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69977 00:06:08.620 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:08.620 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:08.621 element at address: 0x200027a65980 with size: 0.023743 MiB 00:06:08.621 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:08.621 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:08.621 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69977 00:06:08.621 element at address: 0x200027a6bac0 with size: 0.002441 MiB 00:06:08.621 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:08.621 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:08.621 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69977 00:06:08.621 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:08.621 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69977 00:06:08.621 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:08.621 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69977 00:06:08.621 element at address: 0x200027a6c580 with size: 0.000305 MiB 00:06:08.621 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:08.621 20:48:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:08.621 20:48:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69977 00:06:08.621 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 69977 ']' 00:06:08.621 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 69977 00:06:08.621 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:08.621 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.621 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69977 00:06:08.621 killing process with pid 69977 00:06:08.621 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.621 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.621 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69977' 00:06:08.621 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 69977 00:06:08.621 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 69977 00:06:08.881 ************************************ 00:06:08.881 END TEST dpdk_mem_utility 00:06:08.881 ************************************ 00:06:08.881 00:06:08.881 real 0m1.565s 00:06:08.881 user 0m1.609s 00:06:08.881 sys 0m0.416s 00:06:08.881 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.881 20:48:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:08.881 20:48:26 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:08.881 20:48:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.881 20:48:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.881 20:48:26 -- common/autotest_common.sh@10 -- # set +x 
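For reference, the dpdk_mem_utility test above reduces to three steps: start spdk_tgt, trigger the env_dpdk_get_mem_stats RPC so the target writes /tmp/spdk_mem_dump.txt, and parse that dump with scripts/dpdk_mem_info.py. A minimal sketch of the same sequence, assuming a built tree at $SPDK_DIR (the log uses /home/vagrant/spdk_repo/spdk) and the default /var/tmp/spdk.sock RPC socket; the sleep is a crude stand-in for the harness's waitforlisten helper:

  # Start the SPDK target and give its RPC socket time to come up.
  $SPDK_DIR/build/bin/spdk_tgt &
  sleep 1
  # Ask the target to write DPDK memory statistics; the reply names the dump
  # file, /tmp/spdk_mem_dump.txt, as seen in the JSON above.
  $SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize the dump, then re-run with -m 0 for the per-element map of heap 0.
  $SPDK_DIR/scripts/dpdk_mem_info.py
  $SPDK_DIR/scripts/dpdk_mem_info.py -m 0
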
00:06:08.881 ************************************ 00:06:08.881 START TEST event 00:06:08.881 ************************************ 00:06:08.881 20:48:26 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:09.142 * Looking for test storage... 00:06:09.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:09.142 20:48:27 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.142 20:48:27 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.142 20:48:27 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.142 20:48:27 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.142 20:48:27 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.142 20:48:27 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.142 20:48:27 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.142 20:48:27 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.142 20:48:27 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.142 20:48:27 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.142 20:48:27 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.142 20:48:27 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.142 20:48:27 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.142 20:48:27 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.142 20:48:27 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.142 20:48:27 event -- scripts/common.sh@344 -- # case "$op" in 00:06:09.142 20:48:27 event -- scripts/common.sh@345 -- # : 1 00:06:09.142 20:48:27 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.142 20:48:27 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.142 20:48:27 event -- scripts/common.sh@365 -- # decimal 1 00:06:09.142 20:48:27 event -- scripts/common.sh@353 -- # local d=1 00:06:09.142 20:48:27 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.142 20:48:27 event -- scripts/common.sh@355 -- # echo 1 00:06:09.142 20:48:27 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.142 20:48:27 event -- scripts/common.sh@366 -- # decimal 2 00:06:09.142 20:48:27 event -- scripts/common.sh@353 -- # local d=2 00:06:09.142 20:48:27 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.142 20:48:27 event -- scripts/common.sh@355 -- # echo 2 00:06:09.142 20:48:27 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.142 20:48:27 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.142 20:48:27 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.142 20:48:27 event -- scripts/common.sh@368 -- # return 0 00:06:09.142 20:48:27 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.142 20:48:27 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.142 --rc genhtml_branch_coverage=1 00:06:09.142 --rc genhtml_function_coverage=1 00:06:09.142 --rc genhtml_legend=1 00:06:09.142 --rc geninfo_all_blocks=1 00:06:09.142 --rc geninfo_unexecuted_blocks=1 00:06:09.142 00:06:09.142 ' 00:06:09.142 20:48:27 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.142 --rc genhtml_branch_coverage=1 00:06:09.142 --rc genhtml_function_coverage=1 00:06:09.142 --rc genhtml_legend=1 00:06:09.143 --rc 
geninfo_all_blocks=1 00:06:09.143 --rc geninfo_unexecuted_blocks=1 00:06:09.143 00:06:09.143 ' 00:06:09.143 20:48:27 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.143 --rc genhtml_branch_coverage=1 00:06:09.143 --rc genhtml_function_coverage=1 00:06:09.143 --rc genhtml_legend=1 00:06:09.143 --rc geninfo_all_blocks=1 00:06:09.143 --rc geninfo_unexecuted_blocks=1 00:06:09.143 00:06:09.143 ' 00:06:09.143 20:48:27 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.143 --rc genhtml_branch_coverage=1 00:06:09.143 --rc genhtml_function_coverage=1 00:06:09.143 --rc genhtml_legend=1 00:06:09.143 --rc geninfo_all_blocks=1 00:06:09.143 --rc geninfo_unexecuted_blocks=1 00:06:09.143 00:06:09.143 ' 00:06:09.143 20:48:27 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:09.143 20:48:27 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:09.143 20:48:27 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:09.143 20:48:27 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:09.143 20:48:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.143 20:48:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.143 ************************************ 00:06:09.143 START TEST event_perf 00:06:09.143 ************************************ 00:06:09.143 20:48:27 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:09.143 Running I/O for 1 seconds...[2024-11-20 20:48:27.132677] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:09.143 [2024-11-20 20:48:27.132964] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70058 ] 00:06:09.401 [2024-11-20 20:48:27.282199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:09.401 [2024-11-20 20:48:27.315481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.401 [2024-11-20 20:48:27.315947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.401 Running I/O for 1 seconds...[2024-11-20 20:48:27.315643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.401 [2024-11-20 20:48:27.316103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.339 00:06:10.339 lcore 0: 158859 00:06:10.339 lcore 1: 158859 00:06:10.339 lcore 2: 158861 00:06:10.339 lcore 3: 158864 00:06:10.339 done. 
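The four "lcore N:" counts above are events processed per core during the one-second run. For context, the benchmark can be invoked directly with the same arguments the harness used; a sketch, with the binary path taken verbatim from the log:

  # Run the event-perf benchmark for 1 second on four cores (mask 0xF).
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
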
00:06:10.339 ************************************ 00:06:10.339 END TEST event_perf 00:06:10.339 ************************************ 00:06:10.339 00:06:10.339 real 0m1.276s 00:06:10.339 user 0m4.077s 00:06:10.339 sys 0m0.078s 00:06:10.339 20:48:28 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.339 20:48:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.339 20:48:28 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:10.339 20:48:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:10.339 20:48:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.339 20:48:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.339 ************************************ 00:06:10.339 START TEST event_reactor 00:06:10.339 ************************************ 00:06:10.339 20:48:28 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:10.600 [2024-11-20 20:48:28.483883] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:10.600 [2024-11-20 20:48:28.484181] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70097 ] 00:06:10.600 [2024-11-20 20:48:28.630778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.600 [2024-11-20 20:48:28.661710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.020 test_start 00:06:12.020 oneshot 00:06:12.020 tick 100 00:06:12.020 tick 100 00:06:12.020 tick 250 00:06:12.020 tick 100 00:06:12.020 tick 100 00:06:12.020 tick 100 00:06:12.020 tick 250 00:06:12.020 tick 500 00:06:12.020 tick 100 00:06:12.020 tick 100 00:06:12.020 tick 250 00:06:12.020 tick 100 00:06:12.020 tick 100 00:06:12.020 test_end 00:06:12.020 ************************************ 00:06:12.020 END TEST event_reactor 00:06:12.020 ************************************ 00:06:12.020 00:06:12.020 real 0m1.248s 00:06:12.020 user 0m1.073s 00:06:12.020 sys 0m0.066s 00:06:12.020 20:48:29 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.020 20:48:29 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:12.020 20:48:29 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.020 20:48:29 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:12.020 20:48:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.020 20:48:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.020 ************************************ 00:06:12.020 START TEST event_reactor_perf 00:06:12.020 ************************************ 00:06:12.020 20:48:29 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:12.020 [2024-11-20 20:48:29.788331] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:12.020 [2024-11-20 20:48:29.788440] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70134 ] 00:06:12.020 [2024-11-20 20:48:29.937166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.020 [2024-11-20 20:48:29.955372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.955 test_start 00:06:12.955 test_end 00:06:12.955 Performance: 315415 events per second 00:06:12.955 ************************************ 00:06:12.955 00:06:12.955 real 0m1.235s 00:06:12.955 user 0m1.072s 00:06:12.955 sys 0m0.056s 00:06:12.955 20:48:30 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.955 20:48:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.955 END TEST event_reactor_perf 00:06:12.955 ************************************ 00:06:12.955 20:48:31 event -- event/event.sh@49 -- # uname -s 00:06:12.955 20:48:31 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:12.955 20:48:31 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:12.955 20:48:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.955 20:48:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.955 20:48:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.955 ************************************ 00:06:12.955 START TEST event_scheduler 00:06:12.955 ************************************ 00:06:12.955 20:48:31 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:13.214 * Looking for test storage... 
00:06:13.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:13.214 20:48:31 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:13.214 20:48:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:13.214 20:48:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:13.214 20:48:31 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.214 20:48:31 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:13.214 20:48:31 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.214 20:48:31 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:13.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.214 --rc genhtml_branch_coverage=1 00:06:13.214 --rc genhtml_function_coverage=1 00:06:13.214 --rc genhtml_legend=1 00:06:13.214 --rc geninfo_all_blocks=1 00:06:13.214 --rc geninfo_unexecuted_blocks=1 00:06:13.214 00:06:13.214 ' 00:06:13.214 20:48:31 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:13.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.214 --rc genhtml_branch_coverage=1 00:06:13.214 --rc genhtml_function_coverage=1 00:06:13.214 --rc genhtml_legend=1 00:06:13.214 --rc geninfo_all_blocks=1 00:06:13.214 --rc geninfo_unexecuted_blocks=1 00:06:13.214 00:06:13.214 ' 00:06:13.214 20:48:31 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:13.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.214 --rc genhtml_branch_coverage=1 00:06:13.214 --rc genhtml_function_coverage=1 00:06:13.214 --rc genhtml_legend=1 00:06:13.214 --rc geninfo_all_blocks=1 00:06:13.214 --rc geninfo_unexecuted_blocks=1 00:06:13.214 00:06:13.214 ' 00:06:13.214 20:48:31 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:13.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.214 --rc genhtml_branch_coverage=1 00:06:13.214 --rc genhtml_function_coverage=1 00:06:13.214 --rc genhtml_legend=1 00:06:13.214 --rc geninfo_all_blocks=1 00:06:13.214 --rc geninfo_unexecuted_blocks=1 00:06:13.214 00:06:13.214 ' 00:06:13.214 20:48:31 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:13.214 20:48:31 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70199 00:06:13.214 20:48:31 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.214 20:48:31 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70199 00:06:13.214 20:48:31 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70199 ']' 00:06:13.214 20:48:31 event.event_scheduler -- scheduler/scheduler.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:13.214 20:48:31 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.214 20:48:31 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.215 20:48:31 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.215 20:48:31 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.215 20:48:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.215 [2024-11-20 20:48:31.274479] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:13.215 [2024-11-20 20:48:31.274596] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70199 ] 00:06:13.473 [2024-11-20 20:48:31.419250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:13.473 [2024-11-20 20:48:31.441737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.473 [2024-11-20 20:48:31.441787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.473 [2024-11-20 20:48:31.442047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.473 [2024-11-20 20:48:31.441949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.473 20:48:31 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.473 20:48:31 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:13.473 20:48:31 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:13.473 20:48:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.473 20:48:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.473 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:13.473 POWER: Cannot set governor of lcore 0 to userspace 00:06:13.473 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:13.473 POWER: Cannot set governor of lcore 0 to performance 00:06:13.473 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:13.473 POWER: Cannot set governor of lcore 0 to userspace 00:06:13.473 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:13.473 POWER: Unable to set Power Management Environment for lcore 0 00:06:13.473 [2024-11-20 20:48:31.527626] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:13.473 [2024-11-20 20:48:31.527670] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:13.473 [2024-11-20 20:48:31.527701] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:13.473 [2024-11-20 20:48:31.527739] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:13.473 [2024-11-20 20:48:31.527799] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:13.473 [2024-11-20 20:48:31.527838] 
scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:13.473 20:48:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.473 20:48:31 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:13.473 20:48:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.473 20:48:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.473 [2024-11-20 20:48:31.588685] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:13.731 20:48:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.731 20:48:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:13.731 20:48:31 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.731 20:48:31 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.732 20:48:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 ************************************ 00:06:13.732 START TEST scheduler_create_thread 00:06:13.732 ************************************ 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 2 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 3 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 4 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 5 00:06:13.732 20:48:31 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 6 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 7 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 8 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 9 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 10 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:13.732 20:48:31 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.732 20:48:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.112 20:48:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.112 20:48:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:15.112 20:48:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:15.112 20:48:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.112 20:48:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.489 20:48:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.489 00:06:16.489 real 0m2.614s 00:06:16.489 user 0m0.014s 00:06:16.489 sys 0m0.007s 00:06:16.489 ************************************ 00:06:16.489 END TEST scheduler_create_thread 00:06:16.489 ************************************ 00:06:16.489 20:48:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.489 20:48:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.489 20:48:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:16.489 20:48:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70199 00:06:16.489 20:48:34 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70199 ']' 00:06:16.489 20:48:34 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70199 00:06:16.489 20:48:34 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:16.489 20:48:34 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.489 20:48:34 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70199 00:06:16.489 killing process with pid 70199 00:06:16.489 20:48:34 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:16.489 20:48:34 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:16.489 20:48:34 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70199' 00:06:16.489 20:48:34 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70199 00:06:16.489 20:48:34 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70199 00:06:16.749 [2024-11-20 20:48:34.699903] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
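The scheduler_create_thread trace above boils down to a short RPC sequence against the scheduler test app. A minimal sketch of that sequence, assuming the same rpc.py and scheduler_plugin module used by scheduler.sh (socket and paths as in this run):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
# Create a thread that starts idle (-a 0); the RPC prints the new thread id.
id=$($rpc scheduler_thread_create -n half_active -a 0)
$rpc scheduler_thread_set_active "$id" 50     # raise it to 50% busy
# Create a busy thread (-a 100) and remove it again.
id=$($rpc scheduler_thread_create -n deleted -a 100)
$rpc scheduler_thread_delete "$id"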
00:06:16.749 00:06:16.749 real 0m3.777s 00:06:16.749 user 0m5.622s 00:06:16.749 sys 0m0.305s 00:06:16.749 ************************************ 00:06:16.749 END TEST event_scheduler 00:06:16.749 ************************************ 00:06:16.749 20:48:34 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.749 20:48:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.007 20:48:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:17.007 20:48:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:17.007 20:48:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.007 20:48:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.007 20:48:34 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.007 ************************************ 00:06:17.007 START TEST app_repeat 00:06:17.007 ************************************ 00:06:17.007 20:48:34 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:17.007 20:48:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.007 20:48:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.007 20:48:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:17.007 20:48:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.007 20:48:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:17.007 20:48:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:17.007 20:48:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:17.007 Process app_repeat pid: 70292 00:06:17.007 spdk_app_start Round 0 00:06:17.007 20:48:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70292 00:06:17.007 20:48:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.007 20:48:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70292' 00:06:17.007 20:48:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:17.007 20:48:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:17.007 20:48:34 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:17.007 20:48:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70292 /var/tmp/spdk-nbd.sock 00:06:17.007 20:48:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70292 ']' 00:06:17.008 20:48:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.008 20:48:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.008 20:48:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.008 20:48:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.008 20:48:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.008 [2024-11-20 20:48:34.938471] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
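Before the rounds start, event.sh launches the app_repeat binary and blocks until its RPC socket is up. A sketch of that launch pattern (the backgrounding is an assumption; waitforlisten is the autotest helper whose "Waiting for process..." message recurs throughout this log):

/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
    -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &   # 2 cores, 4 app iterations
repeat_pid=$!
waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock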
00:06:17.008 [2024-11-20 20:48:34.938583] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70292 ] 00:06:17.008 [2024-11-20 20:48:35.084457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.008 [2024-11-20 20:48:35.108693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.008 [2024-11-20 20:48:35.108801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.941 20:48:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.941 20:48:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:17.941 20:48:35 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.941 Malloc0 00:06:17.941 20:48:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.200 Malloc1 00:06:18.200 20:48:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.200 20:48:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:18.460 /dev/nbd0 00:06:18.460 20:48:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.460 20:48:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.460 20:48:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:18.461 20:48:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:18.461 20:48:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.461 20:48:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.461 20:48:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:18.461 20:48:36 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:18.461 20:48:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.461 20:48:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.461 20:48:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.461 1+0 records in 00:06:18.461 1+0 records out 00:06:18.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313502 s, 13.1 MB/s 00:06:18.461 20:48:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.461 20:48:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:18.461 20:48:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.461 20:48:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.461 20:48:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:18.461 20:48:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.461 20:48:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.461 20:48:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:18.721 /dev/nbd1 00:06:18.721 20:48:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.721 20:48:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.721 20:48:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.721 20:48:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:18.721 20:48:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.721 20:48:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.721 20:48:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.721 20:48:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:18.721 20:48:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.721 20:48:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.721 20:48:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:18.721 1+0 records in 00:06:18.721 1+0 records out 00:06:18.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398153 s, 10.3 MB/s 00:06:18.721 20:48:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.721 20:48:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:18.721 20:48:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:18.721 20:48:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.721 20:48:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:18.721 20:48:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.721 20:48:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.721 20:48:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.721 20:48:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
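The waitfornbd helper traced for nbd0 and nbd1 above reduces to poll-then-probe. A condensed sketch of the logic the trace shows (the retry delay and the $testdir variable are assumptions; the trace breaks on the first successful grep):

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off between polls
    done
    # Prove the device answers I/O: read one 4 KiB block through it.
    dd if="/dev/$nbd_name" of="$testdir/nbdtest" bs=4096 count=1 iflag=direct
    local size=$(stat -c %s "$testdir/nbdtest")
    rm -f "$testdir/nbdtest"
    [ "$size" != 0 ]
}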
00:06:18.722 20:48:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.983 20:48:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.983 { 00:06:18.983 "nbd_device": "/dev/nbd0", 00:06:18.983 "bdev_name": "Malloc0" 00:06:18.983 }, 00:06:18.983 { 00:06:18.983 "nbd_device": "/dev/nbd1", 00:06:18.983 "bdev_name": "Malloc1" 00:06:18.983 } 00:06:18.983 ]' 00:06:18.983 20:48:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.983 { 00:06:18.983 "nbd_device": "/dev/nbd0", 00:06:18.983 "bdev_name": "Malloc0" 00:06:18.983 }, 00:06:18.983 { 00:06:18.983 "nbd_device": "/dev/nbd1", 00:06:18.983 "bdev_name": "Malloc1" 00:06:18.983 } 00:06:18.983 ]' 00:06:18.983 20:48:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.983 20:48:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.983 /dev/nbd1' 00:06:18.983 20:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.983 20:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.983 /dev/nbd1' 00:06:18.983 20:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.983 20:48:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.983 20:48:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.984 256+0 records in 00:06:18.984 256+0 records out 00:06:18.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100122 s, 105 MB/s 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.984 256+0 records in 00:06:18.984 256+0 records out 00:06:18.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144183 s, 72.7 MB/s 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.984 256+0 records in 00:06:18.984 256+0 records out 00:06:18.984 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206782 s, 50.7 MB/s 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.984 20:48:37 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.984 20:48:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.245 20:48:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.245 20:48:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.245 20:48:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.245 20:48:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.245 20:48:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.245 20:48:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.245 20:48:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.245 20:48:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.505 20:48:37 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.505 20:48:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.765 20:48:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.765 20:48:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.765 20:48:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.765 20:48:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.765 20:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.765 20:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.765 20:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:19.765 20:48:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.765 20:48:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.765 20:48:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.765 20:48:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.765 20:48:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.766 20:48:37 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.027 20:48:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:20.027 [2024-11-20 20:48:38.108494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.027 [2024-11-20 20:48:38.127001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.027 [2024-11-20 20:48:38.127154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.285 [2024-11-20 20:48:38.158364] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:20.285 [2024-11-20 20:48:38.158421] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:23.584 20:48:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:23.584 spdk_app_start Round 1 00:06:23.584 20:48:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:23.584 20:48:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70292 /var/tmp/spdk-nbd.sock 00:06:23.584 20:48:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70292 ']' 00:06:23.584 20:48:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.584 20:48:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:23.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:23.584 20:48:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
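Rounds 0 through 2 all follow the same shape; a reconstruction from the event.sh line numbers in the trace (the restart behavior belongs to the app itself, triggered by SIGTERM, and the per-round body is abbreviated):

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    # ... bdev_malloc_create x2, nbd_start_disk x2, write/verify, nbd_stop_disk x2 ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        spdk_kill_instance SIGTERM   # ends this iteration; the app reinitializes
    sleep 3
done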
00:06:23.584 20:48:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:23.584 20:48:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.584 20:48:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.584 20:48:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:23.584 20:48:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.584 Malloc0 00:06:23.584 20:48:41 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.584 Malloc1 00:06:23.584 20:48:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.584 20:48:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.584 20:48:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.584 20:48:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:23.584 20:48:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.584 20:48:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:23.584 20:48:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:23.585 20:48:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.585 20:48:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.585 20:48:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:23.585 20:48:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.585 20:48:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:23.585 20:48:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:23.585 20:48:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:23.585 20:48:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.585 20:48:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.846 /dev/nbd0 00:06:23.846 20:48:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.846 20:48:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.846 20:48:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:23.846 20:48:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.846 20:48:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.846 20:48:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.846 20:48:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:23.846 20:48:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:23.846 20:48:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.846 20:48:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.846 20:48:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.846 1+0 records in 00:06:23.846 1+0 records out 
00:06:23.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232953 s, 17.6 MB/s 00:06:23.846 20:48:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.846 20:48:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.846 20:48:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.846 20:48:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.846 20:48:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.846 20:48:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.846 20:48:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.846 20:48:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:24.107 /dev/nbd1 00:06:24.107 20:48:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:24.107 20:48:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:24.107 20:48:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:24.107 20:48:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:24.107 20:48:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:24.107 20:48:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:24.107 20:48:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:24.107 20:48:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:24.107 20:48:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:24.107 20:48:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:24.107 20:48:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:24.107 1+0 records in 00:06:24.107 1+0 records out 00:06:24.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027128 s, 15.1 MB/s 00:06:24.107 20:48:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.107 20:48:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:24.107 20:48:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:24.107 20:48:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:24.107 20:48:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:24.107 20:48:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:24.107 20:48:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.107 20:48:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.107 20:48:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.107 20:48:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:24.366 { 00:06:24.366 "nbd_device": "/dev/nbd0", 00:06:24.366 "bdev_name": "Malloc0" 00:06:24.366 }, 00:06:24.366 { 00:06:24.366 "nbd_device": "/dev/nbd1", 00:06:24.366 "bdev_name": "Malloc1" 00:06:24.366 } 
00:06:24.366 ]' 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:24.366 { 00:06:24.366 "nbd_device": "/dev/nbd0", 00:06:24.366 "bdev_name": "Malloc0" 00:06:24.366 }, 00:06:24.366 { 00:06:24.366 "nbd_device": "/dev/nbd1", 00:06:24.366 "bdev_name": "Malloc1" 00:06:24.366 } 00:06:24.366 ]' 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:24.366 /dev/nbd1' 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:24.366 /dev/nbd1' 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:24.366 256+0 records in 00:06:24.366 256+0 records out 00:06:24.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00642239 s, 163 MB/s 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:24.366 256+0 records in 00:06:24.366 256+0 records out 00:06:24.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142655 s, 73.5 MB/s 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:24.366 256+0 records in 00:06:24.366 256+0 records out 00:06:24.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145743 s, 71.9 MB/s 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:24.366 20:48:42 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.366 20:48:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.633 20:48:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.633 20:48:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.633 20:48:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.633 20:48:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.633 20:48:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.633 20:48:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.633 20:48:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.633 20:48:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.633 20:48:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.633 20:48:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.909 20:48:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.909 20:48:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.909 20:48:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.909 20:48:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.909 20:48:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.909 20:48:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.909 20:48:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.909 20:48:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.909 20:48:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.909 20:48:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.909 20:48:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.170 20:48:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.170 20:48:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.170 20:48:43 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:25.170 20:48:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.170 20:48:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.170 20:48:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.170 20:48:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:25.170 20:48:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.170 20:48:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.170 20:48:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.170 20:48:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.170 20:48:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.170 20:48:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.430 20:48:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.430 [2024-11-20 20:48:43.376267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.431 [2024-11-20 20:48:43.392578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.431 [2024-11-20 20:48:43.392581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.431 [2024-11-20 20:48:43.423888] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.431 [2024-11-20 20:48:43.423932] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.731 spdk_app_start Round 2 00:06:28.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.731 20:48:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:28.731 20:48:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:28.731 20:48:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70292 /var/tmp/spdk-nbd.sock 00:06:28.731 20:48:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70292 ']' 00:06:28.731 20:48:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.731 20:48:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.731 20:48:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
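The nbd_get_count check that brackets each round is a three-step pipeline; every command appears verbatim in the trace above (the || true fallback is inferred from the lone `true` the trace shows when grep matches nothing):

disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
    -s /var/tmp/spdk-nbd.sock nbd_get_disks)
names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd || true)  # 2 while attached, 0 after stop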
00:06:28.731 20:48:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.731 20:48:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:28.731 20:48:46 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.731 20:48:46 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:28.731 20:48:46 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.731 Malloc0 00:06:28.731 20:48:46 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.991 Malloc1 00:06:28.991 20:48:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.991 20:48:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.250 /dev/nbd0 00:06:29.250 20:48:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.250 20:48:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.250 20:48:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:29.250 20:48:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:29.250 20:48:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.250 20:48:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.250 20:48:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:29.250 20:48:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:29.250 20:48:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:29.250 20:48:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.250 20:48:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.250 1+0 records in 00:06:29.250 1+0 records out 
00:06:29.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034447 s, 11.9 MB/s 00:06:29.250 20:48:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.250 20:48:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:29.250 20:48:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.250 20:48:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.250 20:48:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:29.250 20:48:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.250 20:48:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.250 20:48:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.250 /dev/nbd1 00:06:29.508 20:48:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.508 20:48:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.508 20:48:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:29.508 20:48:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:29.508 20:48:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:29.508 20:48:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:29.508 20:48:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:29.508 20:48:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:29.508 20:48:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:29.508 20:48:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:29.508 20:48:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.508 1+0 records in 00:06:29.508 1+0 records out 00:06:29.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271231 s, 15.1 MB/s 00:06:29.508 20:48:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.508 20:48:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:29.508 20:48:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.508 20:48:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.508 20:48:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:29.508 20:48:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.508 20:48:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.509 20:48:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.509 20:48:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.509 20:48:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.509 20:48:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.509 { 00:06:29.509 "nbd_device": "/dev/nbd0", 00:06:29.509 "bdev_name": "Malloc0" 00:06:29.509 }, 00:06:29.509 { 00:06:29.509 "nbd_device": "/dev/nbd1", 00:06:29.509 "bdev_name": "Malloc1" 00:06:29.509 } 
00:06:29.509 ]' 00:06:29.509 20:48:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.509 { 00:06:29.509 "nbd_device": "/dev/nbd0", 00:06:29.509 "bdev_name": "Malloc0" 00:06:29.509 }, 00:06:29.509 { 00:06:29.509 "nbd_device": "/dev/nbd1", 00:06:29.509 "bdev_name": "Malloc1" 00:06:29.509 } 00:06:29.509 ]' 00:06:29.509 20:48:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.767 /dev/nbd1' 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.767 /dev/nbd1' 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.767 256+0 records in 00:06:29.767 256+0 records out 00:06:29.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010655 s, 98.4 MB/s 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.767 256+0 records in 00:06:29.767 256+0 records out 00:06:29.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153146 s, 68.5 MB/s 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.767 256+0 records in 00:06:29.767 256+0 records out 00:06:29.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232473 s, 45.1 MB/s 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.767 20:48:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.025 20:48:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.025 20:48:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.025 20:48:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.025 20:48:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.025 20:48:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.025 20:48:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.025 20:48:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.025 20:48:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.025 20:48:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.025 20:48:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.025 20:48:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.025 20:48:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.025 20:48:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.026 20:48:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.026 20:48:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.026 20:48:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.026 20:48:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.026 20:48:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.026 20:48:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.026 20:48:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.026 20:48:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.284 20:48:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.284 20:48:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.284 20:48:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:30.284 20:48:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.284 20:48:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.284 20:48:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.284 20:48:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:30.284 20:48:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.284 20:48:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.284 20:48:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.284 20:48:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.284 20:48:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.284 20:48:48 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.541 20:48:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:30.799 [2024-11-20 20:48:48.704894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.799 [2024-11-20 20:48:48.727661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.799 [2024-11-20 20:48:48.727676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.799 [2024-11-20 20:48:48.770682] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.799 [2024-11-20 20:48:48.770734] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:34.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.080 20:48:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70292 /var/tmp/spdk-nbd.sock 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70292 ']' 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
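Each round's data pass, traced three times above, is the same write-then-compare loop over both devices:

tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write through nbd
done
for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"                              # read back, must match
done
rm "$tmp"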
00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:34.080 20:48:51 event.app_repeat -- event/event.sh@39 -- # killprocess 70292 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70292 ']' 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70292 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70292 00:06:34.080 killing process with pid 70292 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70292' 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70292 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70292 00:06:34.080 spdk_app_start is called in Round 0. 00:06:34.080 Shutdown signal received, stop current app iteration 00:06:34.080 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 reinitialization... 00:06:34.080 spdk_app_start is called in Round 1. 00:06:34.080 Shutdown signal received, stop current app iteration 00:06:34.080 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 reinitialization... 00:06:34.080 spdk_app_start is called in Round 2. 00:06:34.080 Shutdown signal received, stop current app iteration 00:06:34.080 Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 reinitialization... 00:06:34.080 spdk_app_start is called in Round 3. 00:06:34.080 Shutdown signal received, stop current app iteration 00:06:34.080 ************************************ 00:06:34.080 END TEST app_repeat 00:06:34.080 ************************************ 00:06:34.080 20:48:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:34.080 20:48:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:34.080 00:06:34.080 real 0m17.073s 00:06:34.080 user 0m38.186s 00:06:34.080 sys 0m2.130s 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.080 20:48:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.080 20:48:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:34.080 20:48:52 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:34.080 20:48:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.080 20:48:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.080 20:48:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.080 ************************************ 00:06:34.080 START TEST cpu_locks 00:06:34.080 ************************************ 00:06:34.080 20:48:52 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:34.080 * Looking for test storage... 
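The killprocess sequence just traced (kill -0 probe, ps comm lookup, kill, wait) recurs after every test below. A condensed sketch of that teardown pattern, reconstructed from the trace; the sudo special case visible in the trace is simplified away:

    # Kill an SPDK target and wait for it to exit.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # nothing to do if already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid") # e.g. "reactor_0" for an SPDK app
        # the real helper handles process_name = sudo differently; omitted here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                         # reap it when it is our own child
    }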
00:06:34.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:34.080 20:48:52 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.080 20:48:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.080 20:48:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.080 20:48:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.080 20:48:52 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:34.080 20:48:52 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.080 20:48:52 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.081 --rc genhtml_branch_coverage=1 00:06:34.081 --rc genhtml_function_coverage=1 00:06:34.081 --rc genhtml_legend=1 00:06:34.081 --rc geninfo_all_blocks=1 00:06:34.081 --rc geninfo_unexecuted_blocks=1 00:06:34.081 00:06:34.081 ' 00:06:34.081 20:48:52 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.081 --rc genhtml_branch_coverage=1 00:06:34.081 --rc genhtml_function_coverage=1 
00:06:34.081 --rc genhtml_legend=1 00:06:34.081 --rc geninfo_all_blocks=1 00:06:34.081 --rc geninfo_unexecuted_blocks=1 00:06:34.081 00:06:34.081 ' 00:06:34.081 20:48:52 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.081 --rc genhtml_branch_coverage=1 00:06:34.081 --rc genhtml_function_coverage=1 00:06:34.081 --rc genhtml_legend=1 00:06:34.081 --rc geninfo_all_blocks=1 00:06:34.081 --rc geninfo_unexecuted_blocks=1 00:06:34.081 00:06:34.081 ' 00:06:34.081 20:48:52 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.081 --rc genhtml_branch_coverage=1 00:06:34.081 --rc genhtml_function_coverage=1 00:06:34.081 --rc genhtml_legend=1 00:06:34.081 --rc geninfo_all_blocks=1 00:06:34.081 --rc geninfo_unexecuted_blocks=1 00:06:34.081 00:06:34.081 ' 00:06:34.081 20:48:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:34.081 20:48:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:34.081 20:48:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:34.081 20:48:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:34.081 20:48:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.081 20:48:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.081 20:48:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.081 ************************************ 00:06:34.081 START TEST default_locks 00:06:34.081 ************************************ 00:06:34.081 20:48:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:34.081 20:48:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70712 00:06:34.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.081 20:48:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70712 00:06:34.081 20:48:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70712 ']' 00:06:34.081 20:48:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.081 20:48:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.081 20:48:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.081 20:48:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.081 20:48:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.081 20:48:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.339 [2024-11-20 20:48:52.249855] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
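waitforlisten, invoked right after the spdk_tgt launch above, blocks until the fresh target answers on its UNIX-domain RPC socket. A sketch of the idea; the trace shows the retry counter and the socket path, so the exact liveness probe and back-off below are assumptions:

    # Wait until an SPDK app with the given pid listens on its RPC socket.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            # assumed probe: any RPC that answers proves the socket is live
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5   # assumed back-off between probes
        done
        return 1
    }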
00:06:34.339 [2024-11-20 20:48:52.250098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70712 ] 00:06:34.339 [2024-11-20 20:48:52.391488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.339 [2024-11-20 20:48:52.414800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.274 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.274 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:35.274 20:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70712 00:06:35.274 20:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70712 00:06:35.274 20:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.274 20:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70712 00:06:35.274 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70712 ']' 00:06:35.274 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70712 00:06:35.274 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:35.274 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.274 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70712 00:06:35.534 killing process with pid 70712 00:06:35.534 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.534 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.534 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70712' 00:06:35.534 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70712 00:06:35.534 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70712 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70712 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70712 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:35.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
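A few entries back, locks_exist verified that the running target really holds its CPU core lock: lslocks lists every file lock the pid owns, and the check looks for the spdk_cpu_lock prefix among them. A one-liner sketch of the same check:

    # Return 0 if $pid holds at least one spdk_cpu_lock_* file lock.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 70712 && echo "pid 70712 holds its core lock"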
00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70712 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70712 ']' 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.797 ERROR: process (pid: 70712) is no longer running 00:06:35.797 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70712) - No such process 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:35.797 ************************************ 00:06:35.797 END TEST default_locks 00:06:35.797 ************************************ 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:35.797 00:06:35.797 real 0m1.499s 00:06:35.797 user 0m1.518s 00:06:35.797 sys 0m0.469s 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.797 20:48:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.797 20:48:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:35.797 20:48:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.797 20:48:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.797 20:48:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.797 ************************************ 00:06:35.797 START TEST default_locks_via_rpc 00:06:35.797 ************************************ 00:06:35.797 20:48:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:35.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
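The error path just traced wraps waitforlisten in NOT, a helper that succeeds only when the wrapped command fails — the pid was already killed, so waitforlisten is expected to return non-zero. A sketch following the es bookkeeping visible in the trace:

    # Succeed iff the wrapped command fails (and was not killed by a signal).
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1   # 129+ usually means death by signal, not a clean failure
        (( es != 0 ))                # succeed only on a plain non-zero exit
    }

    NOT waitforlisten 70712 && echo "waitforlisten failed as expected"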
00:06:35.797 20:48:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70759 00:06:35.797 20:48:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70759 00:06:35.797 20:48:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70759 ']' 00:06:35.797 20:48:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.797 20:48:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.797 20:48:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.797 20:48:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.797 20:48:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.797 20:48:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.797 [2024-11-20 20:48:53.815819] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:35.797 [2024-11-20 20:48:53.815967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70759 ] 00:06:36.059 [2024-11-20 20:48:53.965094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.059 [2024-11-20 20:48:53.995198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.631 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.631 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:36.631 20:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:36.631 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.631 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70759 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70759 00:06:36.893 
20:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70759 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70759 ']' 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70759 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70759 00:06:36.893 killing process with pid 70759 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70759' 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70759 00:06:36.893 20:48:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70759 00:06:37.464 ************************************ 00:06:37.464 END TEST default_locks_via_rpc 00:06:37.464 ************************************ 00:06:37.464 00:06:37.464 real 0m1.583s 00:06:37.464 user 0m1.624s 00:06:37.464 sys 0m0.500s 00:06:37.464 20:48:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.464 20:48:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.464 20:48:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:37.464 20:48:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.464 20:48:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.464 20:48:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.464 ************************************ 00:06:37.464 START TEST non_locking_app_on_locked_coremask 00:06:37.464 ************************************ 00:06:37.464 20:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:37.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
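default_locks_via_rpc, which just completed above, exercises the same lock from the RPC side: framework_disable_cpumask_locks releases the lock file at runtime, framework_enable_cpumask_locks re-acquires it, and lslocks confirms each state. A sketch of that round trip against a running target (paths as used throughout this log; the pid is the target started for that test):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    pid=70759   # the spdk_tgt started above

    $rpc -s $sock framework_disable_cpumask_locks          # drop the core lock at runtime
    lslocks -p $pid | grep -q spdk_cpu_lock || echo "lock released"

    $rpc -s $sock framework_enable_cpumask_locks           # take it back
    lslocks -p $pid | grep -q spdk_cpu_lock && echo "core lock held again"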
00:06:37.464 20:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70811 00:06:37.464 20:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70811 /var/tmp/spdk.sock 00:06:37.464 20:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70811 ']' 00:06:37.464 20:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.464 20:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.464 20:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.464 20:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.464 20:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.464 20:48:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.464 [2024-11-20 20:48:55.466729] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:37.464 [2024-11-20 20:48:55.466913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70811 ] 00:06:37.724 [2024-11-20 20:48:55.614320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.724 [2024-11-20 20:48:55.643539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.296 20:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.296 20:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:38.296 20:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70827 00:06:38.296 20:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70827 /var/tmp/spdk2.sock 00:06:38.296 20:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70827 ']' 00:06:38.296 20:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.296 20:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:38.296 20:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.296 20:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
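Two targets share core 0 here only because the second one opts out of lock acquisition, and each gets its own RPC socket so both stay reachable. A sketch of the launch pattern as traced above:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $spdk_tgt -m 0x1 &                                               # takes the core 0 lock
    pid1=$!
    $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                          # shares core 0, separate socket
    # each launch would normally be followed by waitforlisten on its own socket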
00:06:38.296 20:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.296 20:48:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.296 [2024-11-20 20:48:56.385788] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:38.296 [2024-11-20 20:48:56.386228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70827 ] 00:06:38.556 [2024-11-20 20:48:56.546408] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:38.556 [2024-11-20 20:48:56.546500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.556 [2024-11-20 20:48:56.608624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70811 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70811 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70811 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70811 ']' 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70811 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70811 00:06:39.555 killing process with pid 70811 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70811' 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70811 00:06:39.555 20:48:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70811 00:06:40.500 20:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70827 00:06:40.500 20:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70827 ']' 00:06:40.500 20:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70827 00:06:40.500 20:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 
-- # uname 00:06:40.500 20:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.500 20:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70827 00:06:40.500 killing process with pid 70827 00:06:40.500 20:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.500 20:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.500 20:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70827' 00:06:40.500 20:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70827 00:06:40.500 20:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70827 00:06:41.068 00:06:41.068 real 0m3.511s 00:06:41.068 user 0m3.832s 00:06:41.068 sys 0m0.843s 00:06:41.068 20:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.068 20:48:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.068 ************************************ 00:06:41.068 END TEST non_locking_app_on_locked_coremask 00:06:41.068 ************************************ 00:06:41.068 20:48:58 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:41.068 20:48:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:41.068 20:48:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.068 20:48:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.068 ************************************ 00:06:41.068 START TEST locking_app_on_unlocked_coremask 00:06:41.068 ************************************ 00:06:41.068 20:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:41.068 20:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70885 00:06:41.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.068 20:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70885 /var/tmp/spdk.sock 00:06:41.068 20:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70885 ']' 00:06:41.068 20:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.068 20:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:41.068 20:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.068 20:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:41.068 20:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.068 20:48:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.068 [2024-11-20 20:48:59.033038] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:41.068 [2024-11-20 20:48:59.033166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70885 ] 00:06:41.068 [2024-11-20 20:48:59.173978] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:41.068 [2024-11-20 20:48:59.174023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.325 [2024-11-20 20:48:59.197117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.892 20:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.892 20:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:41.892 20:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70901 00:06:41.892 20:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70901 /var/tmp/spdk2.sock 00:06:41.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.892 20:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:41.892 20:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70901 ']' 00:06:41.892 20:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.892 20:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.892 20:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.892 20:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.892 20:48:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.892 [2024-11-20 20:48:59.947937] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
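This test flips the previous arrangement: the first target starts with --disable-cpumask-locks and leaves core 0 unlocked, so the second, plain target claims the lock itself — which locks_exist then verifies against the second pid in the trace that follows. A condensed sketch of the scenario:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    $spdk_tgt -m 0x1 --disable-cpumask-locks &   # first target leaves core 0 unlocked
    pid1=$!
    $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # second target claims the core 0 lock itself
    pid2=$!
    # once both are listening, the lock belongs to the second pid, not the first:
    lslocks -p "$pid2" | grep -q spdk_cpu_lock && echo "second target holds the core lock"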
00:06:41.892 [2024-11-20 20:48:59.948216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70901 ] 00:06:42.150 [2024-11-20 20:49:00.099151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.150 [2024-11-20 20:49:00.146700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.714 20:49:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.714 20:49:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:42.714 20:49:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70901 00:06:42.714 20:49:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70901 00:06:42.714 20:49:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.972 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70885 00:06:42.972 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70885 ']' 00:06:42.972 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70885 00:06:42.972 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.972 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.972 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70885 00:06:42.972 killing process with pid 70885 00:06:42.972 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.972 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.972 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70885' 00:06:42.972 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70885 00:06:42.972 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70885 00:06:43.535 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70901 00:06:43.535 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70901 ']' 00:06:43.535 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70901 00:06:43.535 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:43.535 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.535 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70901 00:06:43.793 killing process with pid 70901 00:06:43.793 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.793 20:49:01 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.793 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70901' 00:06:43.793 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70901 00:06:43.793 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70901 00:06:44.052 ************************************ 00:06:44.052 END TEST locking_app_on_unlocked_coremask 00:06:44.052 ************************************ 00:06:44.052 00:06:44.052 real 0m2.995s 00:06:44.052 user 0m3.234s 00:06:44.052 sys 0m0.813s 00:06:44.052 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.052 20:49:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.052 20:49:01 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:44.052 20:49:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.052 20:49:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.052 20:49:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.052 ************************************ 00:06:44.052 START TEST locking_app_on_locked_coremask 00:06:44.052 ************************************ 00:06:44.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.052 20:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:44.052 20:49:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70959 00:06:44.052 20:49:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70959 /var/tmp/spdk.sock 00:06:44.052 20:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70959 ']' 00:06:44.052 20:49:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.052 20:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.052 20:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.052 20:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.052 20:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.053 20:49:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.053 [2024-11-20 20:49:02.052849] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:44.053 [2024-11-20 20:49:02.052973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70959 ] 00:06:44.311 [2024-11-20 20:49:02.193604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.311 [2024-11-20 20:49:02.216114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70975 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70975 /var/tmp/spdk2.sock 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70975 /var/tmp/spdk2.sock 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70975 /var/tmp/spdk2.sock 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70975 ']' 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.876 20:49:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.876 [2024-11-20 20:49:02.948943] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:44.876 [2024-11-20 20:49:02.949483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70975 ] 00:06:45.134 [2024-11-20 20:49:03.104868] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70959 has claimed it. 00:06:45.134 [2024-11-20 20:49:03.104922] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:45.699 ERROR: process (pid: 70975) is no longer running 00:06:45.699 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70975) - No such process 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70959 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70959 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70959 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70959 ']' 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70959 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70959 00:06:45.699 killing process with pid 70959 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70959' 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70959 00:06:45.699 20:49:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70959 00:06:45.959 ************************************ 00:06:45.959 END TEST locking_app_on_locked_coremask 00:06:45.959 ************************************ 00:06:45.959 00:06:45.959 real 0m2.079s 00:06:45.959 user 0m2.284s 00:06:45.959 sys 0m0.510s 00:06:45.959 20:49:04 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.959 20:49:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.218 20:49:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:46.218 20:49:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:46.218 20:49:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.218 20:49:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.218 ************************************ 00:06:46.218 START TEST locking_overlapped_coremask 00:06:46.218 ************************************ 00:06:46.218 20:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:46.218 20:49:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71017 00:06:46.218 20:49:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71017 /var/tmp/spdk.sock 00:06:46.218 20:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71017 ']' 00:06:46.218 20:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.218 20:49:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:46.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.218 20:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.218 20:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.218 20:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.218 20:49:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.218 [2024-11-20 20:49:04.197930] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:46.218 [2024-11-20 20:49:04.198061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71017 ] 00:06:46.477 [2024-11-20 20:49:04.338880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.477 [2024-11-20 20:49:04.364811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.477 [2024-11-20 20:49:04.365061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.477 [2024-11-20 20:49:04.365084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71035 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71035 /var/tmp/spdk2.sock 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71035 /var/tmp/spdk2.sock 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71035 /var/tmp/spdk2.sock 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71035 ']' 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.046 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.046 [2024-11-20 20:49:05.102982] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
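The failure that follows is pure mask arithmetic: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the two masks intersect on core 2 — exactly the core named in the "Cannot create lock on core 2" error below. The overlap can be confirmed with a bitwise AND:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2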
00:06:47.046 [2024-11-20 20:49:05.103305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71035 ] 00:06:47.306 [2024-11-20 20:49:05.267790] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71017 has claimed it. 00:06:47.306 [2024-11-20 20:49:05.267872] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:47.874 ERROR: process (pid: 71035) is no longer running 00:06:47.874 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71035) - No such process 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71017 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71017 ']' 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71017 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71017 00:06:47.874 killing process with pid 71017 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71017' 00:06:47.874 20:49:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71017 00:06:47.874 20:49:05 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71017 00:06:48.133 00:06:48.133 real 0m1.964s 00:06:48.133 user 0m5.378s 00:06:48.133 sys 0m0.447s 00:06:48.133 20:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.133 20:49:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.133 ************************************ 00:06:48.133 END TEST locking_overlapped_coremask 00:06:48.133 ************************************ 00:06:48.133 20:49:06 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:48.133 20:49:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.133 20:49:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.133 20:49:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.133 ************************************ 00:06:48.133 START TEST locking_overlapped_coremask_via_rpc 00:06:48.133 ************************************ 00:06:48.133 20:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:48.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.133 20:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71077 00:06:48.133 20:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71077 /var/tmp/spdk.sock 00:06:48.133 20:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71077 ']' 00:06:48.133 20:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:48.133 20:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.133 20:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.133 20:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.133 20:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.133 20:49:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.133 [2024-11-20 20:49:06.211947] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:48.133 [2024-11-20 20:49:06.212632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71077 ] 00:06:48.391 [2024-11-20 20:49:06.361486] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
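The locking_overlapped_coremask test that just ended, together with the check_remaining_locks comparison above, shows SPDK's per-core lock mechanism: each claimed core appears to leave a lock file at /var/tmp/spdk_cpu_lock_NNN (the names come from the check above), and a second target whose coremask overlaps a claimed core exits at startup. A minimal sketch of that sequence, with the binary path, masks, and socket copied from the log rather than a standalone test:

    # First target claims cores 0-2 (-m 0x7) and leaves the lock files behind.
    build/bin/spdk_tgt -m 0x7 &
    ls /var/tmp/spdk_cpu_lock_*        # expect _000 _001 _002, per the check above
    # Second target asks for cores 2-4 (-m 0x1c); core 2 overlaps, so it logs
    # "Cannot create lock on core 2" and "Unable to acquire lock ... exiting".
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock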
00:06:48.391 [2024-11-20 20:49:06.361827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.391 [2024-11-20 20:49:06.391915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.392 [2024-11-20 20:49:06.392142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.392 [2024-11-20 20:49:06.392177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.959 20:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.959 20:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:48.959 20:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71095 00:06:48.959 20:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:48.959 20:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71095 /var/tmp/spdk2.sock 00:06:48.959 20:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71095 ']' 00:06:48.959 20:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.959 20:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.959 20:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.959 20:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.959 20:49:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.221 [2024-11-20 20:49:07.175689] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:49.221 [2024-11-20 20:49:07.176235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71095 ] 00:06:49.484 [2024-11-20 20:49:07.348647] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:49.484 [2024-11-20 20:49:07.348720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.484 [2024-11-20 20:49:07.425785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.484 [2024-11-20 20:49:07.425880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.484 [2024-11-20 20:49:07.425951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.052 [2024-11-20 20:49:08.034901] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71077 has claimed it. 00:06:50.052 request: 00:06:50.052 { 00:06:50.052 "method": "framework_enable_cpumask_locks", 00:06:50.052 "req_id": 1 00:06:50.052 } 00:06:50.052 Got JSON-RPC error response 00:06:50.052 response: 00:06:50.052 { 00:06:50.052 "code": -32603, 00:06:50.052 "message": "Failed to claim CPU core: 2" 00:06:50.052 } 00:06:50.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
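The -32603 response above is the RPC-driven variant of the same conflict: both targets were started with --disable-cpumask-locks (note the "CPU core locks deactivated" notices), so they boot cleanly despite overlapping masks, and the conflict only surfaces once locks are enabled over JSON-RPC. A hedged reconstruction of the exchange, with socket paths, pids, and masks taken from the log:

    # First target (pid 71077, -m 0x7) claims its cores; this succeeds.
    scripts/rpc.py framework_enable_cpumask_locks
    # Second target (pid 71095, -m 0x1c) then tries on its own socket and
    # fails on the shared core 2, returning the error shown above:
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> {"code": -32603, "message": "Failed to claim CPU core: 2"}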
00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71077 /var/tmp/spdk.sock 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71077 ']' 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.052 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.310 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.310 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.310 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71095 /var/tmp/spdk2.sock 00:06:50.310 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71095 ']' 00:06:50.310 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.310 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.311 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:50.311 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.311 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.569 ************************************ 00:06:50.569 END TEST locking_overlapped_coremask_via_rpc 00:06:50.569 ************************************ 00:06:50.569 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.569 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.569 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:50.569 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.569 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.569 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.569 00:06:50.569 real 0m2.326s 00:06:50.569 user 0m1.123s 00:06:50.569 sys 0m0.123s 00:06:50.569 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.569 20:49:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.569 20:49:08 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:50.569 20:49:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71077 ]] 00:06:50.569 20:49:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71077 00:06:50.569 20:49:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71077 ']' 00:06:50.569 20:49:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71077 00:06:50.569 20:49:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:50.569 20:49:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.569 20:49:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71077 00:06:50.569 killing process with pid 71077 00:06:50.569 20:49:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.569 20:49:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.569 20:49:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71077' 00:06:50.569 20:49:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71077 00:06:50.569 20:49:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71077 00:06:50.827 20:49:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71095 ]] 00:06:50.827 20:49:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71095 00:06:50.827 20:49:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71095 ']' 00:06:50.827 20:49:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71095 00:06:50.827 20:49:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:50.827 20:49:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.827 
20:49:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71095 00:06:50.827 killing process with pid 71095 00:06:50.827 20:49:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:50.827 20:49:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:50.827 20:49:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71095' 00:06:50.827 20:49:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71095 00:06:50.827 20:49:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71095 00:06:51.085 20:49:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.085 Process with pid 71077 is not found 00:06:51.085 20:49:09 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:51.085 20:49:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71077 ]] 00:06:51.085 20:49:09 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71077 00:06:51.085 20:49:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71077 ']' 00:06:51.085 20:49:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71077 00:06:51.085 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71077) - No such process 00:06:51.085 20:49:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71077 is not found' 00:06:51.085 20:49:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71095 ]] 00:06:51.085 20:49:09 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71095 00:06:51.085 20:49:09 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71095 ']' 00:06:51.085 20:49:09 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71095 00:06:51.085 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71095) - No such process 00:06:51.085 Process with pid 71095 is not found 00:06:51.085 20:49:09 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71095 is not found' 00:06:51.085 20:49:09 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.085 ************************************ 00:06:51.085 END TEST cpu_locks 00:06:51.085 ************************************ 00:06:51.085 00:06:51.085 real 0m17.117s 00:06:51.085 user 0m29.552s 00:06:51.085 sys 0m4.664s 00:06:51.085 20:49:09 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.085 20:49:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.085 ************************************ 00:06:51.085 END TEST event 00:06:51.085 ************************************ 00:06:51.085 00:06:51.085 real 0m42.248s 00:06:51.085 user 1m19.746s 00:06:51.085 sys 0m7.543s 00:06:51.085 20:49:09 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.085 20:49:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.342 20:49:09 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:51.342 20:49:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.342 20:49:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.342 20:49:09 -- common/autotest_common.sh@10 -- # set +x 00:06:51.342 ************************************ 00:06:51.342 START TEST thread 00:06:51.342 ************************************ 00:06:51.342 20:49:09 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:51.342 * Looking for test storage... 
00:06:51.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:51.342 20:49:09 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:51.343 20:49:09 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:51.343 20:49:09 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:51.343 20:49:09 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:51.343 20:49:09 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.343 20:49:09 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.343 20:49:09 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.343 20:49:09 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.343 20:49:09 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.343 20:49:09 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.343 20:49:09 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.343 20:49:09 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.343 20:49:09 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.343 20:49:09 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.343 20:49:09 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.343 20:49:09 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:51.343 20:49:09 thread -- scripts/common.sh@345 -- # : 1 00:06:51.343 20:49:09 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.343 20:49:09 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.343 20:49:09 thread -- scripts/common.sh@365 -- # decimal 1 00:06:51.343 20:49:09 thread -- scripts/common.sh@353 -- # local d=1 00:06:51.343 20:49:09 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.343 20:49:09 thread -- scripts/common.sh@355 -- # echo 1 00:06:51.343 20:49:09 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.343 20:49:09 thread -- scripts/common.sh@366 -- # decimal 2 00:06:51.343 20:49:09 thread -- scripts/common.sh@353 -- # local d=2 00:06:51.343 20:49:09 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.343 20:49:09 thread -- scripts/common.sh@355 -- # echo 2 00:06:51.343 20:49:09 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.343 20:49:09 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.343 20:49:09 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.343 20:49:09 thread -- scripts/common.sh@368 -- # return 0 00:06:51.343 20:49:09 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.343 20:49:09 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:51.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.343 --rc genhtml_branch_coverage=1 00:06:51.343 --rc genhtml_function_coverage=1 00:06:51.343 --rc genhtml_legend=1 00:06:51.343 --rc geninfo_all_blocks=1 00:06:51.343 --rc geninfo_unexecuted_blocks=1 00:06:51.343 00:06:51.343 ' 00:06:51.343 20:49:09 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:51.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.343 --rc genhtml_branch_coverage=1 00:06:51.343 --rc genhtml_function_coverage=1 00:06:51.343 --rc genhtml_legend=1 00:06:51.343 --rc geninfo_all_blocks=1 00:06:51.343 --rc geninfo_unexecuted_blocks=1 00:06:51.343 00:06:51.343 ' 00:06:51.343 20:49:09 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:51.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:51.343 --rc genhtml_branch_coverage=1 00:06:51.343 --rc genhtml_function_coverage=1 00:06:51.343 --rc genhtml_legend=1 00:06:51.343 --rc geninfo_all_blocks=1 00:06:51.343 --rc geninfo_unexecuted_blocks=1 00:06:51.343 00:06:51.343 ' 00:06:51.343 20:49:09 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:51.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.343 --rc genhtml_branch_coverage=1 00:06:51.343 --rc genhtml_function_coverage=1 00:06:51.343 --rc genhtml_legend=1 00:06:51.343 --rc geninfo_all_blocks=1 00:06:51.343 --rc geninfo_unexecuted_blocks=1 00:06:51.343 00:06:51.343 ' 00:06:51.343 20:49:09 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.343 20:49:09 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:51.343 20:49:09 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.343 20:49:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.343 ************************************ 00:06:51.343 START TEST thread_poller_perf 00:06:51.343 ************************************ 00:06:51.343 20:49:09 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.343 [2024-11-20 20:49:09.426042] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:51.343 [2024-11-20 20:49:09.426260] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71222 ] 00:06:51.600 [2024-11-20 20:49:09.570909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.601 [2024-11-20 20:49:09.595095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.601 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:52.534 [2024-11-20T20:49:10.653Z] ====================================== 00:06:52.534 [2024-11-20T20:49:10.653Z] busy:2613549818 (cyc) 00:06:52.534 [2024-11-20T20:49:10.653Z] total_run_count: 306000 00:06:52.534 [2024-11-20T20:49:10.653Z] tsc_hz: 2600000000 (cyc) 00:06:52.534 [2024-11-20T20:49:10.653Z] ====================================== 00:06:52.534 [2024-11-20T20:49:10.653Z] poller_cost: 8541 (cyc), 3285 (nsec) 00:06:52.792 00:06:52.792 real 0m1.254s 00:06:52.792 user 0m1.095s 00:06:52.792 sys 0m0.051s 00:06:52.792 20:49:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.792 20:49:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:52.792 ************************************ 00:06:52.792 END TEST thread_poller_perf 00:06:52.792 ************************************ 00:06:52.792 20:49:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.792 20:49:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:52.792 20:49:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.792 20:49:10 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.792 ************************************ 00:06:52.792 START TEST thread_poller_perf 00:06:52.792 ************************************ 00:06:52.792 20:49:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:52.792 [2024-11-20 20:49:10.741887] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:52.792 [2024-11-20 20:49:10.742101] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71264 ] 00:06:52.792 [2024-11-20 20:49:10.888564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.050 [2024-11-20 20:49:10.911403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.050 Running 1000 pollers for 1 seconds with 0 microseconds period. 
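For reference, the poller_cost line in the table above is a simple derived value: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. A worked check using the first run's numbers (this is arithmetic on the printed values, not poller_perf source):

    echo $(( 2613549818 / 306000 ))              # -> 8541 (cyc per poll)
    echo $(( 8541 * 1000000000 / 2600000000 ))   # -> 3285 (nsec at 2.6 GHz)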
00:06:53.990 [2024-11-20T20:49:12.109Z] ====================================== 00:06:53.990 [2024-11-20T20:49:12.109Z] busy:2603260288 (cyc) 00:06:53.990 [2024-11-20T20:49:12.109Z] total_run_count: 3971000 00:06:53.990 [2024-11-20T20:49:12.109Z] tsc_hz: 2600000000 (cyc) 00:06:53.990 [2024-11-20T20:49:12.109Z] ====================================== 00:06:53.990 [2024-11-20T20:49:12.109Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:53.990 00:06:53.990 real 0m1.263s 00:06:53.990 user 0m1.100s 00:06:53.990 sys 0m0.055s 00:06:53.990 20:49:11 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.990 20:49:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.990 ************************************ 00:06:53.990 END TEST thread_poller_perf 00:06:53.990 ************************************ 00:06:53.990 20:49:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:53.990 00:06:53.990 real 0m2.800s 00:06:53.990 user 0m2.302s 00:06:53.990 sys 0m0.239s 00:06:53.990 ************************************ 00:06:53.990 END TEST thread 00:06:53.990 ************************************ 00:06:53.990 20:49:12 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.990 20:49:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.990 20:49:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:53.990 20:49:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:53.990 20:49:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.990 20:49:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.990 20:49:12 -- common/autotest_common.sh@10 -- # set +x 00:06:53.990 ************************************ 00:06:53.990 START TEST app_cmdline 00:06:53.990 ************************************ 00:06:53.990 20:49:12 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:54.251 * Looking for test storage... 
00:06:54.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:54.251 20:49:12 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.251 20:49:12 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.251 20:49:12 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.251 20:49:12 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:54.251 20:49:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.252 20:49:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:54.252 20:49:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.252 20:49:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:54.252 20:49:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:54.252 20:49:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.252 20:49:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:54.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
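The scripts/common.sh xtrace running here (and before each of the other suites) is a component-wise dotted-version compare, apparently used to decide whether the installed lcov predates 2 and still wants the --rc lcov_* options. A condensed sketch reconstructed from the trace itself; the real cmp_versions also handles other operators:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"     # "1.15" -> (1 15)
        IFS=.- read -ra ver2 <<< "$3"     # "2"    -> (2)
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # '<' fails
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # 1 < 2: true
        done
        return 1                                              # equal: not '<'
    }
    lt 1.15 2 && echo "lcov < 2: keep the --rc lcov_* options"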
00:06:54.252 20:49:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.252 20:49:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.252 20:49:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.252 20:49:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:54.252 20:49:12 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.252 20:49:12 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.252 --rc genhtml_branch_coverage=1 00:06:54.252 --rc genhtml_function_coverage=1 00:06:54.252 --rc genhtml_legend=1 00:06:54.252 --rc geninfo_all_blocks=1 00:06:54.252 --rc geninfo_unexecuted_blocks=1 00:06:54.252 00:06:54.252 ' 00:06:54.252 20:49:12 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.252 --rc genhtml_branch_coverage=1 00:06:54.252 --rc genhtml_function_coverage=1 00:06:54.252 --rc genhtml_legend=1 00:06:54.252 --rc geninfo_all_blocks=1 00:06:54.252 --rc geninfo_unexecuted_blocks=1 00:06:54.252 00:06:54.252 ' 00:06:54.252 20:49:12 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.252 --rc genhtml_branch_coverage=1 00:06:54.252 --rc genhtml_function_coverage=1 00:06:54.252 --rc genhtml_legend=1 00:06:54.252 --rc geninfo_all_blocks=1 00:06:54.252 --rc geninfo_unexecuted_blocks=1 00:06:54.252 00:06:54.252 ' 00:06:54.252 20:49:12 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.252 --rc genhtml_branch_coverage=1 00:06:54.252 --rc genhtml_function_coverage=1 00:06:54.252 --rc genhtml_legend=1 00:06:54.252 --rc geninfo_all_blocks=1 00:06:54.252 --rc geninfo_unexecuted_blocks=1 00:06:54.252 00:06:54.252 ' 00:06:54.252 20:49:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:54.252 20:49:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71342 00:06:54.252 20:49:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71342 00:06:54.252 20:49:12 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71342 ']' 00:06:54.252 20:49:12 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.252 20:49:12 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.252 20:49:12 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.252 20:49:12 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.252 20:49:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:54.252 20:49:12 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:54.252 [2024-11-20 20:49:12.333759] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:54.252 [2024-11-20 20:49:12.333904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71342 ] 00:06:54.514 [2024-11-20 20:49:12.482213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.514 [2024-11-20 20:49:12.519995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.113 20:49:13 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.113 20:49:13 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:55.113 20:49:13 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:55.375 { 00:06:55.375 "version": "SPDK v25.01-pre git sha1 557f022f6", 00:06:55.375 "fields": { 00:06:55.375 "major": 25, 00:06:55.375 "minor": 1, 00:06:55.375 "patch": 0, 00:06:55.375 "suffix": "-pre", 00:06:55.375 "commit": "557f022f6" 00:06:55.375 } 00:06:55.375 } 00:06:55.375 20:49:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:55.375 20:49:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:55.375 20:49:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:55.375 20:49:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:55.375 20:49:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:55.375 20:49:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:55.375 20:49:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:55.375 20:49:13 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.375 20:49:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.375 20:49:13 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.375 20:49:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:55.375 20:49:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:55.375 20:49:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:55.375 20:49:13 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:55.375 20:49:13 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:55.375 20:49:13 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.375 20:49:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.375 20:49:13 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.375 20:49:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.375 20:49:13 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.375 20:49:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:55.375 20:49:13 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.375 20:49:13 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:55.375 20:49:13 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:55.636 request: 00:06:55.636 { 00:06:55.636 "method": "env_dpdk_get_mem_stats", 00:06:55.636 "req_id": 1 00:06:55.636 } 00:06:55.636 Got JSON-RPC error response 00:06:55.636 response: 00:06:55.636 { 00:06:55.636 "code": -32601, 00:06:55.636 "message": "Method not found" 00:06:55.636 } 00:06:55.636 20:49:13 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:55.636 20:49:13 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:55.636 20:49:13 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:55.636 20:49:13 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:55.636 20:49:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71342 00:06:55.636 20:49:13 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71342 ']' 00:06:55.637 20:49:13 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71342 00:06:55.637 20:49:13 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:55.637 20:49:13 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.637 20:49:13 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71342 00:06:55.637 killing process with pid 71342 00:06:55.637 20:49:13 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.637 20:49:13 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.637 20:49:13 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71342' 00:06:55.637 20:49:13 app_cmdline -- common/autotest_common.sh@973 -- # kill 71342 00:06:55.637 20:49:13 app_cmdline -- common/autotest_common.sh@978 -- # wait 71342 00:06:55.898 ************************************ 00:06:55.898 END TEST app_cmdline 00:06:55.898 ************************************ 00:06:55.898 00:06:55.898 real 0m1.842s 00:06:55.898 user 0m2.039s 00:06:55.898 sys 0m0.538s 00:06:55.898 20:49:13 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.898 20:49:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.898 20:49:13 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:55.898 20:49:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.898 20:49:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.898 20:49:13 -- common/autotest_common.sh@10 -- # set +x 00:06:55.898 ************************************ 00:06:55.898 START TEST version 00:06:55.898 ************************************ 00:06:55.898 20:49:13 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:56.160 * Looking for test storage... 
00:06:56.160 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:56.160 20:49:14 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.160 20:49:14 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.160 20:49:14 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.160 20:49:14 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.160 20:49:14 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.160 20:49:14 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.160 20:49:14 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.160 20:49:14 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.160 20:49:14 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.160 20:49:14 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.160 20:49:14 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.160 20:49:14 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.160 20:49:14 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.160 20:49:14 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.160 20:49:14 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.160 20:49:14 version -- scripts/common.sh@344 -- # case "$op" in 00:06:56.160 20:49:14 version -- scripts/common.sh@345 -- # : 1 00:06:56.160 20:49:14 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.160 20:49:14 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.160 20:49:14 version -- scripts/common.sh@365 -- # decimal 1 00:06:56.160 20:49:14 version -- scripts/common.sh@353 -- # local d=1 00:06:56.160 20:49:14 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.160 20:49:14 version -- scripts/common.sh@355 -- # echo 1 00:06:56.160 20:49:14 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.160 20:49:14 version -- scripts/common.sh@366 -- # decimal 2 00:06:56.160 20:49:14 version -- scripts/common.sh@353 -- # local d=2 00:06:56.160 20:49:14 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.160 20:49:14 version -- scripts/common.sh@355 -- # echo 2 00:06:56.160 20:49:14 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.160 20:49:14 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.160 20:49:14 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.160 20:49:14 version -- scripts/common.sh@368 -- # return 0 00:06:56.160 20:49:14 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.160 20:49:14 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.160 --rc genhtml_branch_coverage=1 00:06:56.160 --rc genhtml_function_coverage=1 00:06:56.160 --rc genhtml_legend=1 00:06:56.160 --rc geninfo_all_blocks=1 00:06:56.160 --rc geninfo_unexecuted_blocks=1 00:06:56.160 00:06:56.160 ' 00:06:56.160 20:49:14 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.160 --rc genhtml_branch_coverage=1 00:06:56.160 --rc genhtml_function_coverage=1 00:06:56.160 --rc genhtml_legend=1 00:06:56.160 --rc geninfo_all_blocks=1 00:06:56.160 --rc geninfo_unexecuted_blocks=1 00:06:56.160 00:06:56.160 ' 00:06:56.160 20:49:14 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.160 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:56.160 --rc genhtml_branch_coverage=1 00:06:56.160 --rc genhtml_function_coverage=1 00:06:56.160 --rc genhtml_legend=1 00:06:56.160 --rc geninfo_all_blocks=1 00:06:56.160 --rc geninfo_unexecuted_blocks=1 00:06:56.160 00:06:56.160 ' 00:06:56.160 20:49:14 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.160 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.160 --rc genhtml_branch_coverage=1 00:06:56.160 --rc genhtml_function_coverage=1 00:06:56.160 --rc genhtml_legend=1 00:06:56.160 --rc geninfo_all_blocks=1 00:06:56.160 --rc geninfo_unexecuted_blocks=1 00:06:56.160 00:06:56.160 ' 00:06:56.160 20:49:14 version -- app/version.sh@17 -- # get_header_version major 00:06:56.160 20:49:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.160 20:49:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.160 20:49:14 version -- app/version.sh@14 -- # cut -f2 00:06:56.160 20:49:14 version -- app/version.sh@17 -- # major=25 00:06:56.160 20:49:14 version -- app/version.sh@18 -- # get_header_version minor 00:06:56.160 20:49:14 version -- app/version.sh@14 -- # cut -f2 00:06:56.160 20:49:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.160 20:49:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.160 20:49:14 version -- app/version.sh@18 -- # minor=1 00:06:56.160 20:49:14 version -- app/version.sh@19 -- # get_header_version patch 00:06:56.160 20:49:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.160 20:49:14 version -- app/version.sh@14 -- # cut -f2 00:06:56.160 20:49:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.160 20:49:14 version -- app/version.sh@19 -- # patch=0 00:06:56.160 20:49:14 version -- app/version.sh@20 -- # get_header_version suffix 00:06:56.160 20:49:14 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:56.160 20:49:14 version -- app/version.sh@14 -- # tr -d '"' 00:06:56.160 20:49:14 version -- app/version.sh@14 -- # cut -f2 00:06:56.160 20:49:14 version -- app/version.sh@20 -- # suffix=-pre 00:06:56.160 20:49:14 version -- app/version.sh@22 -- # version=25.1 00:06:56.160 20:49:14 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:56.160 20:49:14 version -- app/version.sh@28 -- # version=25.1rc0 00:06:56.160 20:49:14 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:56.160 20:49:14 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:56.160 20:49:14 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:56.160 20:49:14 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:56.160 ************************************ 00:06:56.160 END TEST version 00:06:56.160 ************************************ 00:06:56.160 00:06:56.160 real 0m0.196s 00:06:56.160 user 0m0.133s 00:06:56.160 sys 0m0.088s 00:06:56.160 20:49:14 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.160 20:49:14 version -- common/autotest_common.sh@10 -- # set +x 00:06:56.160 20:49:14 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:56.160 20:49:14 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:56.160 20:49:14 -- spdk/autotest.sh@194 -- # uname -s 00:06:56.160 20:49:14 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:56.160 20:49:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:56.160 20:49:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:56.160 20:49:14 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:56.160 20:49:14 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:56.160 20:49:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:56.161 20:49:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.161 20:49:14 -- common/autotest_common.sh@10 -- # set +x 00:06:56.161 ************************************ 00:06:56.161 START TEST blockdev_nvme 00:06:56.161 ************************************ 00:06:56.161 20:49:14 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:56.422 * Looking for test storage... 00:06:56.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:56.422 20:49:14 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.422 20:49:14 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.422 20:49:14 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.422 20:49:14 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.422 20:49:14 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:56.422 20:49:14 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.422 20:49:14 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.422 --rc genhtml_branch_coverage=1 00:06:56.422 --rc genhtml_function_coverage=1 00:06:56.422 --rc genhtml_legend=1 00:06:56.422 --rc geninfo_all_blocks=1 00:06:56.422 --rc geninfo_unexecuted_blocks=1 00:06:56.422 00:06:56.422 ' 00:06:56.422 20:49:14 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.422 --rc genhtml_branch_coverage=1 00:06:56.422 --rc genhtml_function_coverage=1 00:06:56.422 --rc genhtml_legend=1 00:06:56.422 --rc geninfo_all_blocks=1 00:06:56.422 --rc geninfo_unexecuted_blocks=1 00:06:56.422 00:06:56.422 ' 00:06:56.422 20:49:14 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.423 --rc genhtml_branch_coverage=1 00:06:56.423 --rc genhtml_function_coverage=1 00:06:56.423 --rc genhtml_legend=1 00:06:56.423 --rc geninfo_all_blocks=1 00:06:56.423 --rc geninfo_unexecuted_blocks=1 00:06:56.423 00:06:56.423 ' 00:06:56.423 20:49:14 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.423 --rc genhtml_branch_coverage=1 00:06:56.423 --rc genhtml_function_coverage=1 00:06:56.423 --rc genhtml_legend=1 00:06:56.423 --rc geninfo_all_blocks=1 00:06:56.423 --rc geninfo_unexecuted_blocks=1 00:06:56.423 00:06:56.423 ' 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:56.423 20:49:14 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:56.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71503 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71503 00:06:56.423 20:49:14 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71503 ']' 00:06:56.423 20:49:14 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.423 20:49:14 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.423 20:49:14 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.423 20:49:14 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.423 20:49:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:56.423 20:49:14 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:56.423 [2024-11-20 20:49:14.448895] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:06:56.423 [2024-11-20 20:49:14.449017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71503 ] 00:06:56.684 [2024-11-20 20:49:14.596913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.685 [2024-11-20 20:49:14.621009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.257 20:49:15 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.257 20:49:15 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:57.257 20:49:15 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:57.257 20:49:15 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:57.257 20:49:15 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:57.257 20:49:15 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:57.257 20:49:15 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:57.257 20:49:15 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:57.257 20:49:15 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.257 20:49:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.518 20:49:15 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.518 20:49:15 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:57.518 20:49:15 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.518 20:49:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.518 20:49:15 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.518 20:49:15 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:57.518 20:49:15 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:57.518 20:49:15 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.518 20:49:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.518 20:49:15 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.518 20:49:15 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:57.518 20:49:15 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.518 20:49:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.780 20:49:15 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.780 20:49:15 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:57.780 20:49:15 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.780 20:49:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.780 20:49:15 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.780 20:49:15 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:57.780 20:49:15 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:57.780 20:49:15 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:57.780 20:49:15 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.780 20:49:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.780 20:49:15 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.780 20:49:15 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:57.780 20:49:15 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:57.781 20:49:15 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3efdbc17-2767-436f-adc8-174a85c32481"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3efdbc17-2767-436f-adc8-174a85c32481",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "c5404f22-71d1-4330-8c45-edb73f0c3910"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "c5404f22-71d1-4330-8c45-edb73f0c3910",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5a7d6b1a-a7e5-4592-bff0-17944f42af29"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5a7d6b1a-a7e5-4592-bff0-17944f42af29",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "fce4bf3b-43b5-467d-af2f-70585ae03a9d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fce4bf3b-43b5-467d-af2f-70585ae03a9d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "af8c8795-81c6-4ee1-9e8e-f97c2bdf4253"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "af8c8795-81c6-4ee1-9e8e-f97c2bdf4253",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7ae89b0f-753f-4283-82e7-d8ad5fe240c4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7ae89b0f-753f-4283-82e7-d8ad5fe240c4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:57.781 20:49:15 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:57.781 20:49:15 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:57.781 20:49:15 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:57.781 20:49:15 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 71503 00:06:57.781 20:49:15 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71503 ']' 00:06:57.781 20:49:15 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71503 00:06:57.781 20:49:15 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:57.781 20:49:15 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.781 20:49:15 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71503 00:06:57.781 killing process with pid 71503 00:06:57.781 20:49:15 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.781 20:49:15 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.781 20:49:15 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71503' 00:06:57.781 20:49:15 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71503 00:06:57.781 20:49:15 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71503 00:06:58.042 20:49:16 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:58.042 20:49:16 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:58.042 20:49:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:58.042 20:49:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.042 20:49:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.042 ************************************ 00:06:58.042 START TEST bdev_hello_world 00:06:58.042 ************************************ 00:06:58.042 20:49:16 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:58.042 [2024-11-20 20:49:16.152738] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:06:58.042 [2024-11-20 20:49:16.152865] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71576 ] 00:06:58.303 [2024-11-20 20:49:16.298241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.303 [2024-11-20 20:49:16.321667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.876 [2024-11-20 20:49:16.709462] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:58.876 [2024-11-20 20:49:16.709516] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:58.876 [2024-11-20 20:49:16.709538] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:58.876 [2024-11-20 20:49:16.711730] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:58.876 [2024-11-20 20:49:16.712231] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:58.876 [2024-11-20 20:49:16.712253] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:58.876 [2024-11-20 20:49:16.712506] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
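The hello_world test is a thin wrapper: everything from "Successfully started the application" to "Read string from bdev : Hello World!" above is printed by the prebuilt hello_bdev example, which opens the named bdev, writes a buffer, and reads it back. Run by hand it takes exactly the arguments visible in the run_test line above:

    # Run the hello_bdev example against Nvme0n1 using the shared bdev config.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1
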
00:06:58.876 00:06:58.876 [2024-11-20 20:49:16.712523] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:58.876 ************************************ 00:06:58.876 END TEST bdev_hello_world 00:06:58.876 ************************************ 00:06:58.876 00:06:58.876 real 0m0.784s 00:06:58.876 user 0m0.519s 00:06:58.876 sys 0m0.160s 00:06:58.876 20:49:16 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.876 20:49:16 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:58.876 20:49:16 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:58.876 20:49:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:58.876 20:49:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.876 20:49:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.876 ************************************ 00:06:58.876 START TEST bdev_bounds 00:06:58.876 ************************************ 00:06:58.876 Process bdevio pid: 71607 00:06:58.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.876 20:49:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:58.876 20:49:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71607 00:06:58.876 20:49:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:58.876 20:49:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:58.876 20:49:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71607' 00:06:58.876 20:49:16 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71607 00:06:58.876 20:49:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71607 ']' 00:06:58.876 20:49:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.876 20:49:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.876 20:49:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.876 20:49:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.876 20:49:16 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:58.876 [2024-11-20 20:49:16.981011] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
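bdev_bounds has two moving parts, both visible in the trace: bdevio is started with -w so it attaches to the bdevs from bdev.json and then waits, and the CUnit suites are only kicked off afterwards over the RPC socket by tests.py. Stripped of harness bookkeeping, the sequence is roughly the following sketch (-s 0 carries the PRE_RESERVED_MEM=0 set earlier; the trailing '' in the traced command is an empty extra-arguments slot):

    # Start bdevio in wait mode, then trigger every registered suite over RPC.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    # ...wait for /var/tmp/spdk.sock to answer, as in the earlier sketch...
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"
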
00:06:58.876 [2024-11-20 20:49:16.981725] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71607 ] 00:06:59.137 [2024-11-20 20:49:17.126644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.137 [2024-11-20 20:49:17.152133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.137 [2024-11-20 20:49:17.152386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.137 [2024-11-20 20:49:17.152468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.079 20:49:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.079 20:49:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:00.079 20:49:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:00.079 I/O targets: 00:07:00.079 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:00.079 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:00.079 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.079 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.079 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.079 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:00.079 00:07:00.079 00:07:00.079 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.079 http://cunit.sourceforge.net/ 00:07:00.079 00:07:00.079 00:07:00.079 Suite: bdevio tests on: Nvme3n1 00:07:00.079 Test: blockdev write read block ...passed 00:07:00.079 Test: blockdev write zeroes read block ...passed 00:07:00.079 Test: blockdev write zeroes read no split ...passed 00:07:00.079 Test: blockdev write zeroes read split ...passed 00:07:00.079 Test: blockdev write zeroes read split partial ...passed 00:07:00.079 Test: blockdev reset ...[2024-11-20 20:49:17.943232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:00.079 [2024-11-20 20:49:17.945181] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
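Each suite's "blockdev reset" drives a full controller reset through the bdev_nvme layer: the nvme_ctrlr_disconnect notice marks the teardown, and the test only passes once bdev_nvme_reset_ctrlr_complete reports "Resetting controller successful." The same path can be exercised outside bdevio; assuming your SPDK tree ships the bdev_nvme_reset_controller RPC method (present in recent trees; check rpc.py bdev_nvme_reset_controller --help), it is a one-liner:

    # Request a reset of the attached controller named Nvme3 (assumed RPC name).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme3
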
00:07:00.079 passed 00:07:00.079 Test: blockdev write read 8 blocks ...passed 00:07:00.079 Test: blockdev write read size > 128k ...passed 00:07:00.079 Test: blockdev write read invalid size ...passed 00:07:00.079 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.079 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.079 Test: blockdev write read max offset ...passed 00:07:00.079 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.079 Test: blockdev writev readv 8 blocks ...passed 00:07:00.079 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.079 Test: blockdev writev readv block ...passed 00:07:00.079 Test: blockdev writev readv size > 128k ...passed 00:07:00.079 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.079 Test: blockdev comparev and writev ...[2024-11-20 20:49:17.953624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6a0a000 len:0x1000 00:07:00.079 [2024-11-20 20:49:17.953685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.079 passed 00:07:00.079 Test: blockdev nvme passthru rw ...passed 00:07:00.079 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.079 Test: blockdev nvme admin passthru ...[2024-11-20 20:49:17.954445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.079 [2024-11-20 20:49:17.954477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.079 passed 00:07:00.079 Test: blockdev copy ...passed 00:07:00.079 Suite: bdevio tests on: Nvme2n3 00:07:00.079 Test: blockdev write read block ...passed 00:07:00.079 Test: blockdev write zeroes read block ...passed 00:07:00.079 Test: blockdev write zeroes read no split ...passed 00:07:00.079 Test: blockdev write zeroes read split ...passed 00:07:00.079 Test: blockdev write zeroes read split partial ...passed 00:07:00.079 Test: blockdev reset ...[2024-11-20 20:49:17.977409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.079 [2024-11-20 20:49:17.979947] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:07:00.079 Test: blockdev write read 8 blocks ... 
00:07:00.079 passed 00:07:00.079 Test: blockdev write read size > 128k ...passed 00:07:00.079 Test: blockdev write read invalid size ...passed 00:07:00.079 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.079 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.079 Test: blockdev write read max offset ...passed 00:07:00.079 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.079 Test: blockdev writev readv 8 blocks ...passed 00:07:00.079 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.079 Test: blockdev writev readv block ...passed 00:07:00.079 Test: blockdev writev readv size > 128k ...passed 00:07:00.079 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.079 Test: blockdev comparev and writev ...[2024-11-20 20:49:17.990273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6a03000 len:0x1000 00:07:00.079 [2024-11-20 20:49:17.990387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.079 passed 00:07:00.079 Test: blockdev nvme passthru rw ...passed 00:07:00.079 Test: blockdev nvme passthru vendor specific ...[2024-11-20 20:49:17.991658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.079 [2024-11-20 20:49:17.991700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.079 passed 00:07:00.080 Test: blockdev nvme admin passthru ...passed 00:07:00.080 Test: blockdev copy ...passed 00:07:00.080 Suite: bdevio tests on: Nvme2n2 00:07:00.080 Test: blockdev write read block ...passed 00:07:00.080 Test: blockdev write zeroes read block ...passed 00:07:00.080 Test: blockdev write zeroes read no split ...passed 00:07:00.080 Test: blockdev write zeroes read split ...passed 00:07:00.080 Test: blockdev write zeroes read split partial ...passed 00:07:00.080 Test: blockdev reset ...[2024-11-20 20:49:18.012897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.080 [2024-11-20 20:49:18.014984] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
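The COMPARE FAILURE notices above are expected, not a bug: every passing "comparev and writev" run prints one, since the test appears to stage data so that the NVMe COMPARE must miscompare before the block is rewritten. spdk_nvme_print_completion renders status as NAME (sct/sc) in hex; a throwaway decoder for that suffix:

    # Decode SPDK's "(sct/sc)" completion suffix, e.g. (02/85).
    decode_nvme_status() {
        local sct=$((16#${1%%/*})) sc=$((16#${1##*/}))
        printf 'SCT=0x%02x SC=0x%02x\n' "$sct" "$sc"
    }
    decode_nvme_status 02/85   # SCT 0x2 = media/data-integrity errors, SC 0x85 = Compare Failure
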
00:07:00.080 passed 00:07:00.080 Test: blockdev write read 8 blocks ...passed 00:07:00.080 Test: blockdev write read size > 128k ...passed 00:07:00.080 Test: blockdev write read invalid size ...passed 00:07:00.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.080 Test: blockdev write read max offset ...passed 00:07:00.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.080 Test: blockdev writev readv 8 blocks ...passed 00:07:00.080 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.080 Test: blockdev writev readv block ...passed 00:07:00.080 Test: blockdev writev readv size > 128k ...passed 00:07:00.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.080 Test: blockdev comparev and writev ...[2024-11-20 20:49:18.020672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6a03000 len:0x1000 00:07:00.080 [2024-11-20 20:49:18.020722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.080 passed 00:07:00.080 Test: blockdev nvme passthru rw ...passed 00:07:00.080 Test: blockdev nvme passthru vendor specific ...[2024-11-20 20:49:18.021335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 [2024-11-20 20:49:18.021353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.080 passed 00:07:00.080 Test: blockdev nvme admin passthru ...passed 00:07:00.080 Test: blockdev copy ...passed 00:07:00.080 Suite: bdevio tests on: Nvme2n1 00:07:00.080 Test: blockdev write read block ...passed 00:07:00.080 Test: blockdev write zeroes read block ...passed 00:07:00.080 Test: blockdev write zeroes read no split ...passed 00:07:00.080 Test: blockdev write zeroes read split ...passed 00:07:00.080 Test: blockdev write zeroes read split partial ...passed 00:07:00.080 Test: blockdev reset ...[2024-11-20 20:49:18.036102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.080 [2024-11-20 20:49:18.038190] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 
00:07:00.080 00:07:00.080 Test: blockdev write read 8 blocks ...passed 00:07:00.080 Test: blockdev write read size > 128k ...passed 00:07:00.080 Test: blockdev write read invalid size ...passed 00:07:00.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.080 Test: blockdev write read max offset ...passed 00:07:00.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.080 Test: blockdev writev readv 8 blocks ...passed 00:07:00.080 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.080 Test: blockdev writev readv block ...passed 00:07:00.080 Test: blockdev writev readv size > 128k ...passed 00:07:00.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.080 Test: blockdev comparev and writev ...[2024-11-20 20:49:18.043699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6a03000 len:0x1000 00:07:00.080 [2024-11-20 20:49:18.043846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.080 passed 00:07:00.080 Test: blockdev nvme passthru rw ...passed 00:07:00.080 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.080 Test: blockdev nvme admin passthru ...[2024-11-20 20:49:18.044372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 [2024-11-20 20:49:18.044395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.080 passed 00:07:00.080 Test: blockdev copy ...passed 00:07:00.080 Suite: bdevio tests on: Nvme1n1 00:07:00.080 Test: blockdev write read block ...passed 00:07:00.080 Test: blockdev write zeroes read block ...passed 00:07:00.080 Test: blockdev write zeroes read no split ...passed 00:07:00.080 Test: blockdev write zeroes read split ...passed 00:07:00.080 Test: blockdev write zeroes read split partial ...passed 00:07:00.080 Test: blockdev reset ...[2024-11-20 20:49:18.056645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller [2024-11-20 20:49:18.058437] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:07:00.080 passed 00:07:00.080 Test: blockdev write read 8 blocks ... 
00:07:00.080 passed 00:07:00.080 Test: blockdev write read size > 128k ...passed 00:07:00.080 Test: blockdev write read invalid size ...passed 00:07:00.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.080 Test: blockdev write read max offset ...passed 00:07:00.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.080 Test: blockdev writev readv 8 blocks ...passed 00:07:00.080 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.080 Test: blockdev writev readv block ...passed 00:07:00.080 Test: blockdev writev readv size > 128k ...passed 00:07:00.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.080 Test: blockdev comparev and writev ...[2024-11-20 20:49:18.062931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dda36000 len:0x1000 00:07:00.080 [2024-11-20 20:49:18.063046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.080 passed 00:07:00.080 Test: blockdev nvme passthru rw ...passed 00:07:00.080 Test: blockdev nvme passthru vendor specific ...[2024-11-20 20:49:18.063765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 [2024-11-20 20:49:18.063793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.080 passed 00:07:00.080 Test: blockdev nvme admin passthru ...passed 00:07:00.080 Test: blockdev copy ...passed 00:07:00.080 Suite: bdevio tests on: Nvme0n1 00:07:00.080 Test: blockdev write read block ...passed 00:07:00.080 Test: blockdev write zeroes read block ...passed 00:07:00.080 Test: blockdev write zeroes read no split ...passed 00:07:00.080 Test: blockdev write zeroes read split ...passed 00:07:00.080 Test: blockdev write zeroes read split partial ...passed 00:07:00.080 Test: blockdev reset ...[2024-11-20 20:49:18.079180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:00.080 [2024-11-20 20:49:18.081019] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. passed 00:07:00.080 Test: blockdev write read 8 blocks ...passed 00:07:00.080 Test: blockdev write read size > 128k ...passed 00:07:00.080 Test: blockdev write read invalid size ...passed 00:07:00.080 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.080 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.080 Test: blockdev write read max offset ...passed 00:07:00.080 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.080 Test: blockdev writev readv 8 blocks ...passed 00:07:00.080 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.080 Test: blockdev writev readv block ...passed 00:07:00.080 Test: blockdev writev readv size > 128k ...passed 00:07:00.080 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.080 Test: blockdev comparev and writev ...[2024-11-20 20:49:18.085485] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:00.080 separate metadata which is not supported yet. 
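That skip ties back to the bdev dump earlier in the log: Nvme0n1 is the only namespace reporting "md_size": 64 with "md_interleave": false, i.e. a separate metadata buffer per block, which comparev_and_writev does not handle yet. One way to spot such bdevs from the same RPC output (illustrative jq; the // defaults cover bdevs that omit the fields):

    # Show each bdev's metadata layout from bdev_get_bdevs.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[] | "\(.name): md_size=\(.md_size // 0) md_interleave=\(.md_interleave // false)"'
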
00:07:00.080 passed 00:07:00.080 Test: blockdev nvme passthru rw ...passed 00:07:00.080 Test: blockdev nvme passthru vendor specific ...[2024-11-20 20:49:18.086309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:00.080 [2024-11-20 20:49:18.086340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:00.080 passed 00:07:00.080 Test: blockdev nvme admin passthru ...passed 00:07:00.080 Test: blockdev copy ...passed 00:07:00.080 00:07:00.080 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.080 suites 6 6 n/a 0 0 00:07:00.080 tests 138 138 138 0 0 00:07:00.080 asserts 893 893 893 0 n/a 00:07:00.080 00:07:00.080 Elapsed time = 0.364 seconds 00:07:00.080 0 00:07:00.080 20:49:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71607 00:07:00.080 20:49:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71607 ']' 00:07:00.080 20:49:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71607 00:07:00.080 20:49:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:00.080 20:49:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.080 20:49:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71607 00:07:00.080 20:49:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.080 20:49:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.080 20:49:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71607' 00:07:00.080 killing process with pid 71607 00:07:00.080 20:49:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71607 00:07:00.080 20:49:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71607 00:07:00.342 20:49:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:00.342 00:07:00.342 real 0m1.342s 00:07:00.342 user 0m3.454s 00:07:00.342 sys 0m0.267s 00:07:00.342 20:49:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.342 20:49:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:00.342 ************************************ 00:07:00.342 END TEST bdev_bounds 00:07:00.342 ************************************ 00:07:00.342 20:49:18 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:00.342 20:49:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:00.342 20:49:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.342 20:49:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:00.342 ************************************ 00:07:00.342 START TEST bdev_nbd 00:07:00.342 ************************************ 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:00.342 20:49:18 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71650 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71650 /var/tmp/spdk-nbd.sock 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 71650 ']' 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.342 20:49:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:00.342 [2024-11-20 20:49:18.375612] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:00.342 [2024-11-20 20:49:18.375891] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.603 [2024-11-20 20:49:18.517211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.603 [2024-11-20 20:49:18.542001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.175 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.437 1+0 records in 
00:07:01.437 1+0 records out 00:07:01.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269288 s, 15.2 MB/s 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.437 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.699 1+0 records in 00:07:01.699 1+0 records out 00:07:01.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452148 s, 9.1 MB/s 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.699 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.960 1+0 records in 00:07:01.960 1+0 records out 00:07:01.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114971 s, 3.6 MB/s 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:01.960 20:49:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.221 1+0 records in 00:07:02.221 1+0 records out 00:07:02.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509207 s, 8.0 MB/s 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.221 20:49:20 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.221 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.483 1+0 records in 00:07:02.483 1+0 records out 00:07:02.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037345 s, 11.0 MB/s 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.483 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.745 1+0 records in 00:07:02.745 1+0 records out 00:07:02.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432011 s, 9.5 MB/s 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.745 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:02.746 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.007 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:03.007 { 00:07:03.007 "nbd_device": "/dev/nbd0", 00:07:03.007 "bdev_name": "Nvme0n1" 00:07:03.007 }, 00:07:03.007 { 00:07:03.007 "nbd_device": "/dev/nbd1", 00:07:03.007 "bdev_name": "Nvme1n1" 00:07:03.007 }, 00:07:03.007 { 00:07:03.007 "nbd_device": "/dev/nbd2", 00:07:03.007 "bdev_name": "Nvme2n1" 00:07:03.007 }, 00:07:03.007 { 00:07:03.007 "nbd_device": "/dev/nbd3", 00:07:03.007 "bdev_name": "Nvme2n2" 00:07:03.007 }, 00:07:03.007 { 00:07:03.007 "nbd_device": "/dev/nbd4", 00:07:03.007 "bdev_name": "Nvme2n3" 00:07:03.007 }, 00:07:03.007 { 00:07:03.007 "nbd_device": "/dev/nbd5", 00:07:03.007 "bdev_name": "Nvme3n1" 00:07:03.007 } 00:07:03.007 ]' 00:07:03.007 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:03.007 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:03.007 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:03.007 { 00:07:03.007 "nbd_device": "/dev/nbd0", 00:07:03.007 "bdev_name": "Nvme0n1" 00:07:03.007 }, 00:07:03.007 { 00:07:03.007 "nbd_device": "/dev/nbd1", 00:07:03.007 "bdev_name": "Nvme1n1" 00:07:03.007 }, 00:07:03.007 { 00:07:03.007 "nbd_device": "/dev/nbd2", 00:07:03.007 "bdev_name": "Nvme2n1" 00:07:03.007 }, 00:07:03.007 { 00:07:03.007 "nbd_device": "/dev/nbd3", 00:07:03.007 "bdev_name": "Nvme2n2" 00:07:03.007 }, 00:07:03.007 { 00:07:03.007 "nbd_device": "/dev/nbd4", 00:07:03.007 "bdev_name": "Nvme2n3" 00:07:03.007 }, 00:07:03.007 { 00:07:03.007 "nbd_device": "/dev/nbd5", 00:07:03.007 "bdev_name": "Nvme3n1" 00:07:03.007 } 00:07:03.007 ]' 00:07:03.007 20:49:20 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:03.007 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.007 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:03.007 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.007 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:03.007 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.007 20:49:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:03.268 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:03.268 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:03.268 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:03.268 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.268 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.268 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:03.268 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.268 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.268 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.268 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:03.269 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:03.269 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:03.269 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:03.269 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.269 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.269 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:03.529 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.529 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.529 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.529 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:03.529 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:03.529 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:03.529 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:03.529 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.529 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.529 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:03.529 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.529 20:49:21 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:03.529 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.529 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:03.791 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:03.791 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:03.791 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:03.791 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.791 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.791 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:03.791 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.791 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.791 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.791 20:49:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:04.053 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:04.053 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:04.053 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:04.053 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.053 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.053 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:04.053 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.053 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.053 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.053 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:04.314 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:04.314 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:04.314 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:04.314 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.314 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.314 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:04.314 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.314 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.314 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.314 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.314 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.576 20:49:22 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:04.576 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:04.838 /dev/nbd0 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.838 
20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.838 1+0 records in 00:07:04.838 1+0 records out 00:07:04.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000591055 s, 6.9 MB/s 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:04.838 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:05.100 /dev/nbd1 00:07:05.100 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.100 20:49:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.100 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:05.100 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.100 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.100 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.100 20:49:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:05.100 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.100 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.100 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.100 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.100 1+0 records in 00:07:05.100 1+0 records out 00:07:05.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493921 s, 8.3 MB/s 00:07:05.100 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.100 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.100 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.100 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.100 20:49:23 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:07:05.100 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.100 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.100 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:05.100 /dev/nbd10 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.361 1+0 records in 00:07:05.361 1+0 records out 00:07:05.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100661 s, 4.1 MB/s 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:05.361 /dev/nbd11 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:05.361 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:05.362 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:05.362 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.362 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.362 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.362 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:05.362 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.362 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.362 20:49:23 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.362 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.362 1+0 records in 00:07:05.362 1+0 records out 00:07:05.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340542 s, 12.0 MB/s 00:07:05.362 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.362 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.362 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:05.623 /dev/nbd12 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.623 1+0 records in 00:07:05.623 1+0 records out 00:07:05.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593696 s, 6.9 MB/s 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.623 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.624 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.624 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.624 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.624 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.624 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:05.885 /dev/nbd13 
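For readability: the waitfornbd helper traced for each freshly attached device here (and again for nbd13 immediately below) amounts to the loop sketched next. This is a minimal reconstruction from the autotest_common.sh xtrace lines (@872-@893); only the success path appears in the trace, so the retry back-off and the failure return are assumptions.
    # sketch of waitfornbd reconstructed from the xtrace; sleeps are assumed
    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        # @875-877: wait until the kernel lists the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed; the trace only shows grep succeeding on try 1
        done
        # @888-893: prove the device serves reads with one direct-I/O 4 KiB block
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of="$tmp" bs=4096 count=1 iflag=direct && break
            sleep 0.1   # assumed
        done
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ]   # non-empty read means the device is live; status returned
    }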
00:07:05.885 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:05.885 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:05.885 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:05.885 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.885 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.885 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.885 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:05.885 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.885 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.885 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.885 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.885 1+0 records in 00:07:05.885 1+0 records out 00:07:05.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434781 s, 9.4 MB/s 00:07:05.885 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.885 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.886 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.886 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.886 20:49:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.886 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.886 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:05.886 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.886 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.886 20:49:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:06.148 { 00:07:06.148 "nbd_device": "/dev/nbd0", 00:07:06.148 "bdev_name": "Nvme0n1" 00:07:06.148 }, 00:07:06.148 { 00:07:06.148 "nbd_device": "/dev/nbd1", 00:07:06.148 "bdev_name": "Nvme1n1" 00:07:06.148 }, 00:07:06.148 { 00:07:06.148 "nbd_device": "/dev/nbd10", 00:07:06.148 "bdev_name": "Nvme2n1" 00:07:06.148 }, 00:07:06.148 { 00:07:06.148 "nbd_device": "/dev/nbd11", 00:07:06.148 "bdev_name": "Nvme2n2" 00:07:06.148 }, 00:07:06.148 { 00:07:06.148 "nbd_device": "/dev/nbd12", 00:07:06.148 "bdev_name": "Nvme2n3" 00:07:06.148 }, 00:07:06.148 { 00:07:06.148 "nbd_device": "/dev/nbd13", 00:07:06.148 "bdev_name": "Nvme3n1" 00:07:06.148 } 00:07:06.148 ]' 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:06.148 { 00:07:06.148 "nbd_device": "/dev/nbd0", 00:07:06.148 "bdev_name": "Nvme0n1" 00:07:06.148 }, 00:07:06.148 { 00:07:06.148 "nbd_device": "/dev/nbd1", 00:07:06.148 "bdev_name": "Nvme1n1" 00:07:06.148 }, 00:07:06.148 { 00:07:06.148 "nbd_device": "/dev/nbd10", 00:07:06.148 "bdev_name": "Nvme2n1" 
00:07:06.148 }, 00:07:06.148 { 00:07:06.148 "nbd_device": "/dev/nbd11", 00:07:06.148 "bdev_name": "Nvme2n2" 00:07:06.148 }, 00:07:06.148 { 00:07:06.148 "nbd_device": "/dev/nbd12", 00:07:06.148 "bdev_name": "Nvme2n3" 00:07:06.148 }, 00:07:06.148 { 00:07:06.148 "nbd_device": "/dev/nbd13", 00:07:06.148 "bdev_name": "Nvme3n1" 00:07:06.148 } 00:07:06.148 ]' 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:06.148 /dev/nbd1 00:07:06.148 /dev/nbd10 00:07:06.148 /dev/nbd11 00:07:06.148 /dev/nbd12 00:07:06.148 /dev/nbd13' 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:06.148 /dev/nbd1 00:07:06.148 /dev/nbd10 00:07:06.148 /dev/nbd11 00:07:06.148 /dev/nbd12 00:07:06.148 /dev/nbd13' 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:06.148 256+0 records in 00:07:06.148 256+0 records out 00:07:06.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00688518 s, 152 MB/s 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.148 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:06.409 256+0 records in 00:07:06.409 256+0 records out 00:07:06.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.179518 s, 5.8 MB/s 00:07:06.409 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.409 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:06.699 256+0 records in 00:07:06.699 256+0 records out 00:07:06.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.180358 s, 5.8 MB/s 00:07:06.699 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.699 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:06.699 256+0 records in 00:07:06.699 256+0 records out 00:07:06.699 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17476 s, 6.0 MB/s 00:07:06.699 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.699 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:06.960 256+0 records in 00:07:06.960 256+0 records out 00:07:06.960 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.220608 s, 4.8 MB/s 00:07:06.960 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.960 20:49:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:07.222 256+0 records in 00:07:07.222 256+0 records out 00:07:07.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161208 s, 6.5 MB/s 00:07:07.222 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.222 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:07.483 256+0 records in 00:07:07.483 256+0 records out 00:07:07.483 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.201513 s, 5.2 MB/s 00:07:07.483 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:07.483 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.483 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.483 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:07.483 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:07.483 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:07.483 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:07.483 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.483 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:07.483 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.483 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:07.483 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.483 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:07.483 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.484 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:07.484 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.484 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:07.484 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.484 20:49:25 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:07.484 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:07.484 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:07.484 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.484 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.484 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.484 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:07.484 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.484 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.745 20:49:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:08.007 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:08.007 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:08.007 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:08.007 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.007 20:49:26 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.007 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:08.007 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.007 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.007 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.007 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:08.269 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:08.269 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:08.269 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:08.269 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.269 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.269 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:08.269 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.269 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.269 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.269 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:08.531 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:08.531 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:08.531 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:08.531 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.531 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.531 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:08.531 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.531 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.531 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.531 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:08.793 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:08.793 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:08.793 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:08.793 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.793 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.793 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:08.793 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.793 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.793 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.793 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.793 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:09.055 20:49:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:09.316 malloc_lvol_verify 00:07:09.316 20:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:09.316 7576dbee-b176-4594-95d8-2a2698dca13d 00:07:09.577 20:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:09.577 2472b73a-c51d-4c22-a764-6ecf79ca0984 00:07:09.577 20:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:09.837 /dev/nbd0 00:07:09.838 20:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:09.838 20:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:09.838 20:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:09.838 20:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:09.838 20:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:09.838 mke2fs 1.47.0 (5-Feb-2023) 00:07:09.838 Discarding device blocks: 0/4096 done 00:07:09.838 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:09.838 00:07:09.838 Allocating group tables: 0/1 done 00:07:09.838 Writing inode tables: 0/1 done 00:07:09.838 Creating journal (1024 blocks): done 00:07:09.838 Writing superblocks and filesystem accounting information: 0/1 done 00:07:09.838 00:07:09.838 20:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:09.838 20:49:27 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.838 20:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:09.838 20:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.838 20:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:09.838 20:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.838 20:49:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71650 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 71650 ']' 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 71650 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71650 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.099 killing process with pid 71650 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71650' 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 71650 00:07:10.099 20:49:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 71650 00:07:10.361 20:49:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:10.361 00:07:10.361 real 0m10.154s 00:07:10.361 user 0m14.281s 00:07:10.361 sys 0m3.369s 00:07:10.361 20:49:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.361 ************************************ 00:07:10.361 END TEST bdev_nbd 00:07:10.361 ************************************ 00:07:10.361 20:49:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:10.623 20:49:28 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:10.623 20:49:28 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:10.623 skipping fio tests on NVMe due to multi-ns failures. 00:07:10.623 20:49:28 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
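Before the fio-skip notice above hands off to bdev_verify: condensed, the whole nbd round-trip that bdev_nbd just traced is the sequence below. The commands are taken from the nbd_common.sh xtrace; the $rpc shorthand is ours, and the per-device waitfornbd/waitfornbd_exit polling is elided.
    # condensed from the bdev_nbd trace above; $rpc is shorthand, not a traced variable
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbds=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    bdevs=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    for i in "${!nbds[@]}"; do
        $rpc nbd_start_disk "${bdevs[i]}" "${nbds[i]}"   # attach bdev i to nbd i
    done
    $rpc nbd_get_disks | jq -r '.[] | .nbd_device'       # expect all six mappings
    dd if=/dev/urandom of="$tmp" bs=4096 count=256       # one 1 MiB random pattern
    for nbd in "${nbds[@]}"; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it everywhere
    done
    for nbd in "${nbds[@]}"; do
        cmp -b -n 1M "$tmp" "$nbd"                       # read back, byte-compare
    done
    rm "$tmp"
    for nbd in "${nbds[@]}"; do
        $rpc nbd_stop_disk "$nbd"                        # detach; count must drop to 0
    done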
00:07:10.623 20:49:28 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:07:10.623 20:49:28 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:10.623 20:49:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:07:10.623 20:49:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:10.623 20:49:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:10.623 ************************************
00:07:10.623 START TEST bdev_verify
00:07:10.623 ************************************
00:07:10.623 20:49:28 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:10.623 [2024-11-20 20:49:28.602983] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... [2024-11-20 20:49:28.603132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72029 ]
00:07:10.884 [2024-11-20 20:49:28.750835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:10.884 [2024-11-20 20:49:28.789810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:10.884 [2024-11-20 20:49:28.789819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:11.457 Running I/O for 5 seconds...
00:07:13.345 17408.00 IOPS, 68.00 MiB/s [2024-11-20T20:49:32.841Z] 17984.00 IOPS, 70.25 MiB/s [2024-11-20T20:49:33.779Z] 19733.33 IOPS, 77.08 MiB/s [2024-11-20T20:49:34.721Z] 19824.00 IOPS, 77.44 MiB/s [2024-11-20T20:49:34.721Z] 19571.20 IOPS, 76.45 MiB/s
00:07:16.602 Latency(us)
00:07:16.602 [2024-11-20T20:49:34.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:16.602 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:16.602 Verification LBA range: start 0x0 length 0xbd0bd
00:07:16.602 Nvme0n1 : 5.08 1663.97 6.50 0.00 0.00 76773.97 16837.71 75820.11
00:07:16.602 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:16.602 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:16.602 Nvme0n1 : 5.06 1568.79 6.13 0.00 0.00 81366.67 17039.36 80659.69
00:07:16.602 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:16.602 Verification LBA range: start 0x0 length 0xa0000
00:07:16.602 Nvme1n1 : 5.08 1663.51 6.50 0.00 0.00 76718.04 15022.87 73400.32
00:07:16.602 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:16.602 Verification LBA range: start 0xa0000 length 0xa0000
00:07:16.602 Nvme1n1 : 5.06 1568.36 6.13 0.00 0.00 81211.79 20366.57 79046.50
00:07:16.602 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:16.602 Verification LBA range: start 0x0 length 0x80000
00:07:16.602 Nvme2n1 : 5.08 1663.05 6.50 0.00 0.00 76598.82 15224.52 66947.54
00:07:16.602 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:16.602 Verification LBA range: start 0x80000 length 0x80000
00:07:16.602 Nvme2n1 : 5.06 1567.91 6.12 0.00 0.00 81075.29 20064.10 77030.01
00:07:16.602 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:16.602 Verification LBA range: start 0x0 length 0x80000
00:07:16.602 Nvme2n2 : 5.08 1662.04 6.49 0.00 0.00 76485.31 16736.89 64124.46
00:07:16.602 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:16.602 Verification LBA range: start 0x80000 length 0x80000
00:07:16.602 Nvme2n2 : 5.06 1567.44 6.12 0.00 0.00 80929.06 21878.94 74610.22
00:07:16.602 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:16.602 Verification LBA range: start 0x0 length 0x80000
00:07:16.602 Nvme2n3 : 5.08 1661.59 6.49 0.00 0.00 76366.61 16837.71 64931.05
00:07:16.602 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:16.602 Verification LBA range: start 0x80000 length 0x80000
00:07:16.602 Nvme2n3 : 5.06 1566.98 6.12 0.00 0.00 80847.37 18249.26 73400.32
00:07:16.602 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:16.602 Verification LBA range: start 0x0 length 0x20000
00:07:16.602 Nvme3n1 : 5.09 1661.14 6.49 0.00 0.00 76257.74 16636.06 66947.54
00:07:16.602 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:16.602 Verification LBA range: start 0x20000 length 0x20000
00:07:16.602 Nvme3n1 : 5.07 1576.41 6.16 0.00 0.00 80296.13 3528.86 77836.60
00:07:16.602 [2024-11-20T20:49:34.721Z] ===================================================================================================================
00:07:16.602 [2024-11-20T20:49:34.721Z] Total : 19391.20 75.75 0.00 0.00 78675.88 3528.86 80659.69
00:07:17.199
00:07:17.199 real 0m6.564s
00:07:17.199 user 0m12.214s
00:07:17.199 sys 0m0.312s
00:07:17.199 20:49:35 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:17.199 ************************************
00:07:17.199 END TEST bdev_verify
00:07:17.199 ************************************
00:07:17.199 20:49:35 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:17.199 20:49:35 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:17.199 20:49:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:07:17.199 20:49:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:17.199 20:49:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:17.199 ************************************
00:07:17.199 START TEST bdev_verify_big_io
00:07:17.199 ************************************
00:07:17.199 20:49:35 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:17.199 [2024-11-20 20:49:35.234940] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
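Before the big-I/O pass above gets going, for reference: the bdev_verify stage that just finished reduces to the single bdevperf invocation below. The flag glosses are inferred by matching them against fields in the output above, not taken from bdevperf's help text.
    # bdev_verify's bdevperf call, as traced; glosses cross-checked against the log:
    #   -q 128     queue depth        ("depth: 128" in the Job lines)
    #   -o 4096    I/O size in bytes  ("IO size: 4096")
    #   -w verify  write, read back, and compare each block
    #   -t 5       run time           ("Running I/O for 5 seconds...")
    #   -m 0x3     core mask: two reactors, cores 0 and 1
    #   -C         kept exactly as traced; not glossed here
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
As a sanity check, the steady-state figure above is self-consistent: 19571.20 IOPS at 4096 bytes is about 80,163,635 B/s, i.e. the reported 76.45 MiB/s.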
00:07:17.199 [2024-11-20 20:49:35.235085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72121 ]
00:07:17.460 [2024-11-20 20:49:35.383094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:17.460 [2024-11-20 20:49:35.426468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:17.460 [2024-11-20 20:49:35.426517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.034 Running I/O for 5 seconds...
00:07:22.712 959.00 IOPS, 59.94 MiB/s [2024-11-20T20:49:42.209Z] 2556.00 IOPS, 159.75 MiB/s [2024-11-20T20:49:42.209Z] 3386.00 IOPS, 211.62 MiB/s
00:07:24.090 Latency(us)
00:07:24.090 [2024-11-20T20:49:42.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:24.090 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:24.090 Verification LBA range: start 0x0 length 0xbd0b
00:07:24.090 Nvme0n1 : 5.63 136.35 8.52 0.00 0.00 907676.88 35893.56 987274.63
00:07:24.090 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:24.090 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:24.090 Nvme0n1 : 5.64 141.91 8.87 0.00 0.00 868029.11 35893.56 1155046.79
00:07:24.090 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:24.090 Verification LBA range: start 0x0 length 0xa000
00:07:24.090 Nvme1n1 : 5.63 136.30 8.52 0.00 0.00 880842.44 88725.66 825955.25
00:07:24.090 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:24.090 Verification LBA range: start 0xa000 length 0xa000
00:07:24.090 Nvme1n1 : 5.64 142.21 8.89 0.00 0.00 841812.62 81062.99 1374441.16
00:07:24.090 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:24.090 Verification LBA range: start 0x0 length 0x8000
00:07:24.090 Nvme2n1 : 5.78 137.00 8.56 0.00 0.00 844789.59 143574.25 832408.02
00:07:24.090 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:24.090 Verification LBA range: start 0x8000 length 0x8000
00:07:24.090 Nvme2n1 : 5.74 146.31 9.14 0.00 0.00 786641.81 81869.59 1206669.00
00:07:24.090 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:24.090 Verification LBA range: start 0x0 length 0x8000
00:07:24.090 Nvme2n2 : 5.91 147.95 9.25 0.00 0.00 770203.85 38111.70 851766.35
00:07:24.090 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:24.090 Verification LBA range: start 0x8000 length 0x8000
00:07:24.090 Nvme2n2 : 5.85 150.15 9.38 0.00 0.00 746494.89 57671.68 1677721.60
00:07:24.091 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:24.091 Verification LBA range: start 0x0 length 0x8000
00:07:24.091 Nvme2n3 : 5.91 147.58 9.22 0.00 0.00 747587.70 37910.06 871124.68
00:07:24.091 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:24.091 Verification LBA range: start 0x8000 length 0x8000
00:07:24.091 Nvme2n3 : 5.94 159.50 9.97 0.00 0.00 680575.74 17745.13 1703532.70
00:07:24.091 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:24.091 Verification LBA range: start 0x0 length 0x2000
00:07:24.091 Nvme3n1 : 5.92 162.29 10.14 0.00 0.00 667188.82 1039.75 890483.00
00:07:24.091 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:24.091 Verification LBA range: start 0x2000 length 0x2000
00:07:24.091 Nvme3n1 : 5.98 197.44 12.34 0.00 0.00 538441.52 1001.94 1329271.73
00:07:24.091 [2024-11-20T20:49:42.210Z] ===================================================================================================================
00:07:24.091 [2024-11-20T20:49:42.210Z] Total : 1805.01 112.81 0.00 0.00 760768.50 1001.94 1703532.70
00:07:25.033 ************************************
00:07:25.033 END TEST bdev_verify_big_io
00:07:25.033 ************************************
00:07:25.033
00:07:25.033 real 0m7.812s
00:07:25.033 user 0m14.291s
00:07:25.033 sys 0m0.320s
00:07:25.033 20:49:42 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:25.033 20:49:42 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:25.033 20:49:43 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:25.033 20:49:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:25.033 20:49:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:25.033 20:49:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:25.033 ************************************
00:07:25.033 START TEST bdev_write_zeroes
00:07:25.033 ************************************
00:07:25.033 20:49:43 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:25.291 [2024-11-20 20:49:43.100473] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... [2024-11-20 20:49:43.100606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72222 ]
00:07:25.291 [2024-11-20 20:49:43.240080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:25.291 [2024-11-20 20:49:43.265968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:25.549 Running I/O for 1 seconds...
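Each of these stages runs under the same run_test wrapper whose starred banners and real/user/sys blocks appear throughout the log. The sketch below reconstructs only its observable behavior from those banners; the real helper in autotest_common.sh also manages xtrace state (the @1111/@1130 xtrace_disable and set +x lines above), which is elided here.
    # minimal sketch of run_test's visible behavior, inferred from the log banners
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys block on completion
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }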
00:07:26.928 49005.00 IOPS, 191.43 MiB/s
00:07:26.928 Latency(us)
00:07:26.928 [2024-11-20T20:49:45.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:26.928 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:26.928 Nvme0n1 : 1.02 8007.47 31.28 0.00 0.00 15954.02 6301.54 143574.25
00:07:26.928 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:26.928 Nvme1n1 : 1.03 8213.77 32.09 0.00 0.00 15540.98 8469.27 115343.36
00:07:26.928 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:26.928 Nvme2n1 : 1.03 8168.39 31.91 0.00 0.00 15541.47 9376.69 119376.34
00:07:26.928 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:26.928 Nvme2n2 : 1.03 8159.17 31.87 0.00 0.00 15538.34 9175.04 119376.34
00:07:26.928 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:26.928 Nvme2n3 : 1.03 8149.86 31.84 0.00 0.00 15518.34 9326.28 119376.34
00:07:26.928 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:26.928 Nvme3n1 : 1.03 8140.70 31.80 0.00 0.00 15501.81 8519.68 120182.94
00:07:26.928 [2024-11-20T20:49:45.047Z] ===================================================================================================================
00:07:26.928 [2024-11-20T20:49:45.047Z] Total : 48839.35 190.78 0.00 0.00 15597.82 6301.54 143574.25
00:07:26.928
00:07:26.928 real 0m1.850s
00:07:26.928 user 0m1.554s
00:07:26.928 sys 0m0.184s
00:07:26.928 20:49:44 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:26.928 20:49:44 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:26.928 ************************************
00:07:26.928 END TEST bdev_write_zeroes
00:07:26.928 ************************************
00:07:26.929 20:49:44 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:26.929 20:49:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:26.929 20:49:44 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.929 20:49:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:26.929 ************************************
00:07:26.929 START TEST bdev_json_nonenclosed
00:07:26.929 ************************************
00:07:26.929 20:49:44 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:26.929 [2024-11-20 20:49:45.003325] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
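A quick arithmetic check on the write_zeroes summary just printed: at a 4 KiB I/O size, the IOPS and MiB/s columns should differ by exactly a factor of 4096/2^20, and they do.
    # 49005.00 IOPS at 4096-byte blocks -> bytes/s and MiB/s
    awk 'BEGIN { b = 49005 * 4096; printf "%d B/s = %.2f MiB/s\n", b, b / (1024 * 1024) }'
    # prints: 200724480 B/s = 191.43 MiB/s, matching "49005.00 IOPS, 191.43 MiB/s"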
00:07:26.929 [2024-11-20 20:49:45.003447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72264 ] 00:07:27.188 [2024-11-20 20:49:45.150558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.189 [2024-11-20 20:49:45.180281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.189 [2024-11-20 20:49:45.180380] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:27.189 [2024-11-20 20:49:45.180398] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:27.189 [2024-11-20 20:49:45.180414] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.189 00:07:27.189 real 0m0.317s 00:07:27.189 user 0m0.122s 00:07:27.189 sys 0m0.092s 00:07:27.189 20:49:45 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.189 ************************************ 00:07:27.189 END TEST bdev_json_nonenclosed 00:07:27.189 ************************************ 00:07:27.189 20:49:45 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:27.449 20:49:45 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.449 20:49:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:27.449 20:49:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.449 20:49:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:27.450 ************************************ 00:07:27.450 START TEST bdev_json_nonarray 00:07:27.450 ************************************ 00:07:27.450 20:49:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.450 [2024-11-20 20:49:45.381219] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:27.450 [2024-11-20 20:49:45.381337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72284 ] 00:07:27.450 [2024-11-20 20:49:45.527391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.450 [2024-11-20 20:49:45.553077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.450 [2024-11-20 20:49:45.553192] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
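Both JSON failures above are deliberate: nonenclosed.json and nonarray.json exercise json_config_prepare_ctx, which requires the configuration to be a single object enclosed in {} whose "subsystems" member is an array. A minimal sketch of a config that satisfies both checks (an empty bdev subsystem, purely illustrative, written to a hypothetical path):

  cat > /tmp/minimal_config.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
  }
  EOF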
00:07:27.450 [2024-11-20 20:49:45.553212] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:27.450 [2024-11-20 20:49:45.553233] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.711 00:07:27.711 real 0m0.307s 00:07:27.711 user 0m0.122s 00:07:27.711 sys 0m0.082s 00:07:27.712 ************************************ 00:07:27.712 20:49:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.712 20:49:45 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:27.712 END TEST bdev_json_nonarray 00:07:27.712 ************************************ 00:07:27.712 20:49:45 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:27.712 20:49:45 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:27.712 20:49:45 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:27.712 20:49:45 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:27.712 20:49:45 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:27.712 20:49:45 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:27.712 20:49:45 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:27.712 20:49:45 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:27.712 20:49:45 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:27.712 20:49:45 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:27.712 20:49:45 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:27.712 00:07:27.712 real 0m31.456s 00:07:27.712 user 0m48.538s 00:07:27.712 sys 0m5.511s 00:07:27.712 20:49:45 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.712 ************************************ 00:07:27.712 20:49:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:27.712 END TEST blockdev_nvme 00:07:27.712 ************************************ 00:07:27.712 20:49:45 -- spdk/autotest.sh@209 -- # uname -s 00:07:27.712 20:49:45 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:27.712 20:49:45 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:27.712 20:49:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.712 20:49:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.712 20:49:45 -- common/autotest_common.sh@10 -- # set +x 00:07:27.712 ************************************ 00:07:27.712 START TEST blockdev_nvme_gpt 00:07:27.712 ************************************ 00:07:27.712 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:27.712 * Looking for test storage... 
00:07:27.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:27.712 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:27.712 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.712 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:27.973 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.973 20:49:45 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:27.973 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.973 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.973 --rc genhtml_branch_coverage=1 00:07:27.973 --rc genhtml_function_coverage=1 00:07:27.973 --rc genhtml_legend=1 00:07:27.973 --rc geninfo_all_blocks=1 00:07:27.973 --rc geninfo_unexecuted_blocks=1 00:07:27.973 00:07:27.973 ' 00:07:27.973 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.973 --rc 
genhtml_branch_coverage=1 00:07:27.973 --rc genhtml_function_coverage=1 00:07:27.973 --rc genhtml_legend=1 00:07:27.973 --rc geninfo_all_blocks=1 00:07:27.973 --rc geninfo_unexecuted_blocks=1 00:07:27.973 00:07:27.973 ' 00:07:27.973 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.973 --rc genhtml_branch_coverage=1 00:07:27.973 --rc genhtml_function_coverage=1 00:07:27.973 --rc genhtml_legend=1 00:07:27.973 --rc geninfo_all_blocks=1 00:07:27.973 --rc geninfo_unexecuted_blocks=1 00:07:27.973 00:07:27.973 ' 00:07:27.973 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.973 --rc genhtml_branch_coverage=1 00:07:27.973 --rc genhtml_function_coverage=1 00:07:27.973 --rc genhtml_legend=1 00:07:27.973 --rc geninfo_all_blocks=1 00:07:27.973 --rc geninfo_unexecuted_blocks=1 00:07:27.973 00:07:27.973 ' 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72364 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72364 
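The lt 1.15 2 trace a few lines above is scripts/common.sh comparing the installed lcov version against 2: split both version strings into fields, then compare field by field numerically until one side wins. A standalone sketch of the same idea (function name hypothetical; common.sh splits on '.-:' while this sketch splits on '.' only):

  version_lt() {  # succeed when dot-separated version $1 sorts before $2
      local IFS=.
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1  # versions are equal
  }

  version_lt 1.15 2 && echo 'older lcov detected'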
00:07:27.973 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72364 ']' 00:07:27.973 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.973 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.973 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.973 20:49:45 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:27.973 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.973 20:49:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:27.973 [2024-11-20 20:49:46.015802] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:27.973 [2024-11-20 20:49:46.015977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72364 ] 00:07:28.234 [2024-11-20 20:49:46.166466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.234 [2024-11-20 20:49:46.203939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.807 20:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.807 20:49:46 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:28.807 20:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:28.807 20:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:28.807 20:49:46 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:29.389 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:29.389 Waiting for block devices as requested 00:07:29.389 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:29.649 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:29.649 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:29.649 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:34.936 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:34.936 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.936 20:49:52 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.936 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:34.937 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:34.937 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:34.937 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.937 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.937 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:34.937 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:34.937 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:34.937 20:49:52 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:34.937 20:49:52 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:34.937 BYT; 00:07:34.937 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:34.937 BYT; 00:07:34.937 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:34.937 20:49:52 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:34.937 20:49:52 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:35.946 The operation has completed successfully. 00:07:35.946 20:49:53 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:36.879 The operation has completed successfully. 00:07:36.879 20:49:54 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:37.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:38.011 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.011 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.011 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.011 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.011 20:49:55 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:38.011 20:49:55 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.011 20:49:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.011 [] 00:07:38.011 20:49:55 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.011 20:49:55 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:38.011 20:49:55 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:38.011 20:49:55 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:38.011 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:38.012 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:38.012 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.012 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.270 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.270 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:38.270 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.270 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.270 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.270 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:38.270 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:38.270 20:49:56 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.270 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.270 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.270 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:38.270 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.270 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.270 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.270 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:38.270 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.270 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.529 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.529 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:38.529 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:38.529 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.529 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.529 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:38.529 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.529 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:38.529 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:38.530 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "549be3b8-83fd-4382-8a7a-945ce8e59da1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "549be3b8-83fd-4382-8a7a-945ce8e59da1",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2a8f2012-3296-4014-b07b-bbca9b7c3e03"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2a8f2012-3296-4014-b07b-bbca9b7c3e03",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "b9c1a566-29a3-4a0c-b14f-dd530d768f6b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b9c1a566-29a3-4a0c-b14f-dd530d768f6b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "81b78e25-110a-4a18-9b2b-6ac1ac5e7b49"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "81b78e25-110a-4a18-9b2b-6ac1ac5e7b49",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "6f4388f1-1483-4758-9ba6-6c8681958802"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "6f4388f1-1483-4758-9ba6-6c8681958802",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:38.530 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:38.530 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:38.530 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:38.530 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 72364 00:07:38.530 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72364 ']' 00:07:38.530 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72364 00:07:38.530 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:38.530 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.530 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72364 00:07:38.530 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.530 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.530 killing process with pid 72364 00:07:38.530 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72364' 00:07:38.530 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72364 00:07:38.530 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72364 00:07:38.791 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:38.791 20:49:56 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:38.791 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:38.791 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.791 20:49:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.791 ************************************ 00:07:38.791 START TEST bdev_hello_world 00:07:38.791 ************************************ 00:07:38.791 20:49:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:38.791 
[2024-11-20 20:49:56.884034] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:38.791 [2024-11-20 20:49:56.884165] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72975 ] 00:07:39.050 [2024-11-20 20:49:57.027682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.050 [2024-11-20 20:49:57.058965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.620 [2024-11-20 20:49:57.437519] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:39.620 [2024-11-20 20:49:57.437558] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:39.620 [2024-11-20 20:49:57.437575] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:39.620 [2024-11-20 20:49:57.439350] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:39.620 [2024-11-20 20:49:57.440259] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:39.620 [2024-11-20 20:49:57.440286] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:39.620 [2024-11-20 20:49:57.440813] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:39.620 00:07:39.620 [2024-11-20 20:49:57.440836] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:39.620 00:07:39.620 real 0m0.778s 00:07:39.620 user 0m0.505s 00:07:39.620 sys 0m0.170s 00:07:39.620 20:49:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.620 ************************************ 00:07:39.620 END TEST bdev_hello_world 00:07:39.620 ************************************ 00:07:39.620 20:49:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:39.620 20:49:57 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:39.620 20:49:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:39.620 20:49:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.620 20:49:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.620 ************************************ 00:07:39.620 START TEST bdev_bounds 00:07:39.620 ************************************ 00:07:39.620 20:49:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:39.620 20:49:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73006 00:07:39.620 20:49:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:39.620 Process bdevio pid: 73006 00:07:39.620 20:49:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73006' 00:07:39.620 20:49:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73006 00:07:39.620 20:49:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73006 ']' 00:07:39.620 20:49:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.620 20:49:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:39.620 20:49:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.620 20:49:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.621 20:49:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:39.621 20:49:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:39.621 [2024-11-20 20:49:57.729391] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:39.621 [2024-11-20 20:49:57.729519] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73006 ] 00:07:39.881 [2024-11-20 20:49:57.870109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.881 [2024-11-20 20:49:57.896918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.881 [2024-11-20 20:49:57.897192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.881 [2024-11-20 20:49:57.897266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.817 20:49:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.817 20:49:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:40.817 20:49:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:40.817 I/O targets: 00:07:40.817 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:40.817 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:40.817 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:40.817 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:40.817 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:40.817 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:40.817 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:40.817 00:07:40.817 00:07:40.817 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.817 http://cunit.sourceforge.net/ 00:07:40.817 00:07:40.817 00:07:40.817 Suite: bdevio tests on: Nvme3n1 00:07:40.817 Test: blockdev write read block ...passed 00:07:40.817 Test: blockdev write zeroes read block ...passed 00:07:40.817 Test: blockdev write zeroes read no split ...passed 00:07:40.817 Test: blockdev write zeroes read split ...passed 00:07:40.817 Test: blockdev write zeroes read split partial ...passed 00:07:40.817 Test: blockdev reset ...[2024-11-20 20:49:58.676504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:40.817 passed 00:07:40.818 Test: blockdev write read 8 blocks ...[2024-11-20 20:49:58.680127] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:40.818 passed 00:07:40.818 Test: blockdev write read size > 128k ...passed 00:07:40.818 Test: blockdev write read invalid size ...passed 00:07:40.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:40.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:40.818 Test: blockdev write read max offset ...passed 00:07:40.818 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:40.818 Test: blockdev writev readv 8 blocks ...passed 00:07:40.818 Test: blockdev writev readv 30 x 1block ...passed 00:07:40.818 Test: blockdev writev readv block ...passed 00:07:40.818 Test: blockdev writev readv size > 128k ...passed 00:07:40.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:40.818 Test: blockdev comparev and writev ...[2024-11-20 20:49:58.692785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d460a000 len:0x1000 00:07:40.818 [2024-11-20 20:49:58.692840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:40.818 passed 00:07:40.818 Test: blockdev nvme passthru rw ...passed 00:07:40.818 Test: blockdev nvme passthru vendor specific ...passed 00:07:40.818 Test: blockdev nvme admin passthru ...[2024-11-20 20:49:58.694939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:40.818 [2024-11-20 20:49:58.694968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:40.818 passed 00:07:40.818 Test: blockdev copy ...passed 00:07:40.818 Suite: bdevio tests on: Nvme2n3 00:07:40.818 Test: blockdev write read block ...passed 00:07:40.818 Test: blockdev write zeroes read block ...passed 00:07:40.818 Test: blockdev write zeroes read no split ...passed 00:07:40.818 Test: blockdev write zeroes read split ...passed 00:07:40.818 Test: blockdev write zeroes read split partial ...passed 00:07:40.818 Test: blockdev reset ...[2024-11-20 20:49:58.716363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:40.818 passed 00:07:40.818 Test: blockdev write read 8 blocks ...[2024-11-20 20:49:58.719515] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:40.818 passed 00:07:40.818 Test: blockdev write read size > 128k ...passed 00:07:40.818 Test: blockdev write read invalid size ...passed 00:07:40.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:40.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:40.818 Test: blockdev write read max offset ...passed 00:07:40.818 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:40.818 Test: blockdev writev readv 8 blocks ...passed 00:07:40.818 Test: blockdev writev readv 30 x 1block ...passed 00:07:40.818 Test: blockdev writev readv block ...passed 00:07:40.818 Test: blockdev writev readv size > 128k ...passed 00:07:40.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:40.818 Test: blockdev comparev and writev ...[2024-11-20 20:49:58.731343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aaa04000 len:0x1000 00:07:40.818 [2024-11-20 20:49:58.731393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:40.818 passed 00:07:40.818 Test: blockdev nvme passthru rw ...passed 00:07:40.818 Test: blockdev nvme passthru vendor specific ...[2024-11-20 20:49:58.733455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:40.818 passed 00:07:40.818 Test: blockdev nvme admin passthru ...[2024-11-20 20:49:58.733497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:40.818 passed 00:07:40.818 Test: blockdev copy ...passed 00:07:40.818 Suite: bdevio tests on: Nvme2n2 00:07:40.818 Test: blockdev write read block ...passed 00:07:40.818 Test: blockdev write zeroes read block ...passed 00:07:40.818 Test: blockdev write zeroes read no split ...passed 00:07:40.818 Test: blockdev write zeroes read split ...passed 00:07:40.818 Test: blockdev write zeroes read split partial ...passed 00:07:40.818 Test: blockdev reset ...[2024-11-20 20:49:58.754378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:40.818 passed 00:07:40.818 Test: blockdev write read 8 blocks ...[2024-11-20 20:49:58.756370] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:40.818 passed 00:07:40.818 Test: blockdev write read size > 128k ...passed 00:07:40.818 Test: blockdev write read invalid size ...passed 00:07:40.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:40.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:40.818 Test: blockdev write read max offset ...passed 00:07:40.818 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:40.818 Test: blockdev writev readv 8 blocks ...passed 00:07:40.818 Test: blockdev writev readv 30 x 1block ...passed 00:07:40.818 Test: blockdev writev readv block ...passed 00:07:40.818 Test: blockdev writev readv size > 128k ...passed 00:07:40.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:40.818 Test: blockdev comparev and writev ...[2024-11-20 20:49:58.768535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aaa04000 len:0x1000 00:07:40.818 [2024-11-20 20:49:58.768582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:40.818 passed 00:07:40.818 Test: blockdev nvme passthru rw ...passed 00:07:40.818 Test: blockdev nvme passthru vendor specific ...passed 00:07:40.818 Test: blockdev nvme admin passthru ...[2024-11-20 20:49:58.770218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:40.818 [2024-11-20 20:49:58.770257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:40.818 passed 00:07:40.818 Test: blockdev copy ...passed 00:07:40.818 Suite: bdevio tests on: Nvme2n1 00:07:40.818 Test: blockdev write read block ...passed 00:07:40.818 Test: blockdev write zeroes read block ...passed 00:07:40.818 Test: blockdev write zeroes read no split ...passed 00:07:40.818 Test: blockdev write zeroes read split ...passed 00:07:40.818 Test: blockdev write zeroes read split partial ...passed 00:07:40.818 Test: blockdev reset ...[2024-11-20 20:49:58.792348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:40.818 passed 00:07:40.818 Test: blockdev write read 8 blocks ...[2024-11-20 20:49:58.794388] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:40.818 passed 00:07:40.818 Test: blockdev write read size > 128k ...passed 00:07:40.818 Test: blockdev write read invalid size ...passed 00:07:40.818 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:40.818 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:40.818 Test: blockdev write read max offset ...passed 00:07:40.818 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:40.818 Test: blockdev writev readv 8 blocks ...passed 00:07:40.818 Test: blockdev writev readv 30 x 1block ...passed 00:07:40.818 Test: blockdev writev readv block ...passed 00:07:40.818 Test: blockdev writev readv size > 128k ...passed 00:07:40.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:40.818 Test: blockdev comparev and writev ...[2024-11-20 20:49:58.803639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2aaa06000 len:0x1000 00:07:40.818 [2024-11-20 20:49:58.803683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:40.818 passed 00:07:40.818 Test: blockdev nvme passthru rw ...passed 00:07:40.818 Test: blockdev nvme passthru vendor specific ...[2024-11-20 20:49:58.804374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:40.818 [2024-11-20 20:49:58.804399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:40.818 passed 00:07:40.818 Test: blockdev nvme admin passthru ...passed 00:07:40.818 Test: blockdev copy ...passed 00:07:40.818 Suite: bdevio tests on: Nvme1n1p2 00:07:40.818 Test: blockdev write read block ...passed 00:07:40.818 Test: blockdev write zeroes read block ...passed 00:07:40.818 Test: blockdev write zeroes read no split ...passed 00:07:40.818 Test: blockdev write zeroes read split ...passed 00:07:40.818 Test: blockdev write zeroes read split partial ...passed 00:07:40.818 Test: blockdev reset ...[2024-11-20 20:49:58.829798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:40.818 [2024-11-20 20:49:58.831348] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:40.818 passed 00:07:40.818 Test: blockdev write read 8 blocks ...passed 00:07:40.818 Test: blockdev write read size > 128k ...passed 00:07:40.818 Test: blockdev write read invalid size ...passed 00:07:40.819 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:40.819 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:40.819 Test: blockdev write read max offset ...passed 00:07:40.819 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:40.819 Test: blockdev writev readv 8 blocks ...passed 00:07:40.819 Test: blockdev writev readv 30 x 1block ...passed 00:07:40.819 Test: blockdev writev readv block ...passed 00:07:40.819 Test: blockdev writev readv size > 128k ...passed 00:07:40.819 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:40.819 Test: blockdev comparev and writev ...[2024-11-20 20:49:58.837384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2aaa02000 len:0x1000 00:07:40.819 [2024-11-20 20:49:58.837424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:40.819 passed 00:07:40.819 Test: blockdev nvme passthru rw ...passed 00:07:40.819 Test: blockdev nvme passthru vendor specific ...passed 00:07:40.819 Test: blockdev nvme admin passthru ...passed 00:07:40.819 Test: blockdev copy ...passed 00:07:40.819 Suite: bdevio tests on: Nvme1n1p1 00:07:40.819 Test: blockdev write read block ...passed 00:07:40.819 Test: blockdev write zeroes read block ...passed 00:07:40.819 Test: blockdev write zeroes read no split ...passed 00:07:40.819 Test: blockdev write zeroes read split ...passed 00:07:40.819 Test: blockdev write zeroes read split partial ...passed 00:07:40.819 Test: blockdev reset ...[2024-11-20 20:49:58.851416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:40.819 [2024-11-20 20:49:58.852794] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:40.819 passed 00:07:40.819 Test: blockdev write read 8 blocks ...passed 00:07:40.819 Test: blockdev write read size > 128k ...passed 00:07:40.819 Test: blockdev write read invalid size ...passed 00:07:40.819 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:40.819 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:40.819 Test: blockdev write read max offset ...passed 00:07:40.819 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:40.819 Test: blockdev writev readv 8 blocks ...passed 00:07:40.819 Test: blockdev writev readv 30 x 1block ...passed 00:07:40.819 Test: blockdev writev readv block ...passed 00:07:40.819 Test: blockdev writev readv size > 128k ...passed 00:07:40.819 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:40.819 Test: blockdev comparev and writev ...[2024-11-20 20:49:58.858198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2dbe3b000 len:0x1000 00:07:40.819 [2024-11-20 20:49:58.858254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:40.819 passed 00:07:40.819 Test: blockdev nvme passthru rw ...passed 00:07:40.819 Test: blockdev nvme passthru vendor specific ...passed 00:07:40.819 Test: blockdev nvme admin passthru ...passed 00:07:40.819 Test: blockdev copy ...passed 00:07:40.819 Suite: bdevio tests on: Nvme0n1 00:07:40.819 Test: blockdev write read block ...passed 00:07:40.819 Test: blockdev write zeroes read block ...passed 00:07:40.819 Test: blockdev write zeroes read no split ...passed 00:07:40.819 Test: blockdev write zeroes read split ...passed 00:07:40.819 Test: blockdev write zeroes read split partial ...passed 00:07:40.819 Test: blockdev reset ...[2024-11-20 20:49:58.871177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:40.819 [2024-11-20 20:49:58.872506] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:40.819 passed 00:07:40.819 Test: blockdev write read 8 blocks ...passed 00:07:40.819 Test: blockdev write read size > 128k ...passed 00:07:40.819 Test: blockdev write read invalid size ...passed 00:07:40.819 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:40.819 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:40.819 Test: blockdev write read max offset ...passed 00:07:40.819 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:40.819 Test: blockdev writev readv 8 blocks ...passed 00:07:40.819 Test: blockdev writev readv 30 x 1block ...passed 00:07:40.819 Test: blockdev writev readv block ...passed 00:07:40.819 Test: blockdev writev readv size > 128k ...passed 00:07:40.819 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:40.819 Test: blockdev comparev and writev ...[2024-11-20 20:49:58.876755] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:40.819 separate metadata which is not supported yet. 
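The skip notice above is informational rather than a failure: bdevio does not drive compare-and-write through a namespace that carries separate (non-interleaved) metadata, so the test is recorded as passed without issuing the command. Whether a bdev falls into that category can be checked from the RPC side; a small sketch, assuming a target on the default socket (md_size and md_interleave are the fields bdev_get_bdevs is expected to report):

  scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | \
      jq -r '.[0] | "md_size=\(.md_size) interleaved=\(.md_interleave)"'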
00:07:40.819 passed 00:07:40.819 Test: blockdev nvme passthru rw ...passed 00:07:40.819 Test: blockdev nvme passthru vendor specific ...[2024-11-20 20:49:58.877315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:40.819 passed 00:07:40.819 Test: blockdev nvme admin passthru ...[2024-11-20 20:49:58.877350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:40.819 passed 00:07:40.819 Test: blockdev copy ...passed 00:07:40.819 00:07:40.819 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.819 suites 7 7 n/a 0 0 00:07:40.819 tests 161 161 161 0 0 00:07:40.819 asserts 1025 1025 1025 0 n/a 00:07:40.819 00:07:40.819 Elapsed time = 0.494 seconds 00:07:40.819 0 00:07:40.819 20:49:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73006 00:07:40.819 20:49:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73006 ']' 00:07:40.819 20:49:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73006 00:07:40.819 20:49:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:40.819 20:49:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.819 20:49:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73006 00:07:40.819 20:49:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.819 20:49:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.819 killing process with pid 73006 00:07:40.819 20:49:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73006' 00:07:40.819 20:49:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73006 00:07:40.819 20:49:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73006 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:41.080 00:07:41.080 real 0m1.398s 00:07:41.080 user 0m3.553s 00:07:41.080 sys 0m0.275s 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.080 ************************************ 00:07:41.080 END TEST bdev_bounds 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:41.080 ************************************ 00:07:41.080 20:49:59 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:41.080 20:49:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:41.080 20:49:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.080 20:49:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.080 ************************************ 00:07:41.080 START TEST bdev_nbd 00:07:41.080 ************************************ 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:41.080 20:49:59 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73049 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73049 /var/tmp/spdk-nbd.sock 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73049 ']' 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:41.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.080 20:49:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:41.080 [2024-11-20 20:49:59.165099] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:07:41.080 [2024-11-20 20:49:59.165204] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.339 [2024-11-20 20:49:59.304717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.339 [2024-11-20 20:49:59.327187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.910 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.910 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:41.910 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:41.910 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.910 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:41.910 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:41.911 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:41.911 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:41.911 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:41.911 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:41.911 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:41.911 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:41.911 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:41.911 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:41.911 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:42.172 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:42.172 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:42.172 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:42.172 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:42.172 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:42.172 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:42.172 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:42.172 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:42.173 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:42.173 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:42.173 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:42.173 20:50:00 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.173 1+0 records in 00:07:42.173 1+0 records out 00:07:42.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121864 s, 3.4 MB/s 00:07:42.173 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.173 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:42.173 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.173 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:42.173 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:42.173 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:42.173 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:42.173 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:42.434 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:42.434 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:42.434 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:42.434 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:42.434 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:42.434 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:42.434 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:42.434 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:42.434 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:42.434 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:42.434 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:42.434 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.434 1+0 records in 00:07:42.435 1+0 records out 00:07:42.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000578973 s, 7.1 MB/s 00:07:42.435 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.435 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:42.435 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.435 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:42.435 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:42.435 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:42.435 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:42.435 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.697 1+0 records in 00:07:42.697 1+0 records out 00:07:42.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125533 s, 3.3 MB/s 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:42.697 20:50:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:42.959 1+0 records in 00:07:42.959 1+0 records out 00:07:42.959 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120924 s, 3.4 MB/s 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:42.959 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.221 1+0 records in 00:07:43.221 1+0 records out 00:07:43.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120105 s, 3.4 MB/s 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:43.221 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
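Each nbd_start_disk call in this phase is invoked without an explicit /dev/nbdX, so the RPC allocates the next free device and returns its path, after which the same readiness check (waitfornbd) runs: poll /proc/partitions until the kernel has registered the device, then prove it is readable with a single O_DIRECT block copy. Condensed into standalone form it is roughly the following; the device name and scratch path are illustrative, and the real helper bounds the polling at 20 attempts:

  until grep -q -w nbd5 /proc/partitions; do sleep 0.1; done    # wait for the kernel to expose the device
  dd if=/dev/nbd5 of=/tmp/nbdtest bs=4096 count=1 iflag=direct  # one 4 KiB read through the NBD socket, bypassing the page cache
  test "$(stat -c %s /tmp/nbdtest)" -eq 4096 && rm -f /tmp/nbdtest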
00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.483 1+0 records in 00:07:43.483 1+0 records out 00:07:43.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114403 s, 3.6 MB/s 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:43.483 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:43.745 1+0 records in 00:07:43.745 1+0 records out 00:07:43.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134937 s, 3.0 MB/s 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:43.745 20:50:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:44.007 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:44.007 { 00:07:44.007 "nbd_device": "/dev/nbd0", 00:07:44.007 "bdev_name": "Nvme0n1" 00:07:44.007 }, 00:07:44.007 { 00:07:44.007 "nbd_device": "/dev/nbd1", 00:07:44.007 "bdev_name": "Nvme1n1p1" 00:07:44.007 }, 00:07:44.007 { 00:07:44.007 "nbd_device": "/dev/nbd2", 00:07:44.007 "bdev_name": "Nvme1n1p2" 00:07:44.007 }, 00:07:44.007 { 00:07:44.007 "nbd_device": "/dev/nbd3", 00:07:44.007 "bdev_name": "Nvme2n1" 00:07:44.007 }, 00:07:44.007 { 00:07:44.007 "nbd_device": "/dev/nbd4", 00:07:44.007 "bdev_name": "Nvme2n2" 00:07:44.007 }, 00:07:44.007 { 00:07:44.007 "nbd_device": "/dev/nbd5", 00:07:44.007 "bdev_name": "Nvme2n3" 00:07:44.007 }, 00:07:44.007 { 00:07:44.007 "nbd_device": "/dev/nbd6", 00:07:44.007 "bdev_name": "Nvme3n1" 00:07:44.007 } 00:07:44.007 ]' 00:07:44.007 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:44.007 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:44.007 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:44.007 { 00:07:44.007 "nbd_device": "/dev/nbd0", 00:07:44.007 "bdev_name": "Nvme0n1" 00:07:44.007 }, 00:07:44.007 { 00:07:44.007 "nbd_device": "/dev/nbd1", 00:07:44.007 "bdev_name": "Nvme1n1p1" 00:07:44.007 }, 00:07:44.007 { 00:07:44.007 "nbd_device": "/dev/nbd2", 00:07:44.007 "bdev_name": "Nvme1n1p2" 00:07:44.007 }, 00:07:44.007 { 00:07:44.007 "nbd_device": "/dev/nbd3", 00:07:44.007 "bdev_name": "Nvme2n1" 00:07:44.007 }, 00:07:44.007 { 00:07:44.007 "nbd_device": "/dev/nbd4", 00:07:44.007 "bdev_name": "Nvme2n2" 00:07:44.007 }, 00:07:44.007 { 00:07:44.007 "nbd_device": "/dev/nbd5", 00:07:44.007 "bdev_name": "Nvme2n3" 00:07:44.007 }, 00:07:44.007 { 00:07:44.007 "nbd_device": "/dev/nbd6", 00:07:44.007 "bdev_name": "Nvme3n1" 00:07:44.007 } 00:07:44.007 ]' 00:07:44.007 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:44.007 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.007 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:44.007 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:44.007 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:44.007 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.007 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:44.269 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:44.269 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:44.269 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:44.269 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.269 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.269 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:44.269 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.269 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.269 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.269 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:44.530 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:44.530 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:44.530 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:44.530 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.530 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.530 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:44.530 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.530 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.530 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.530 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:44.791 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:44.791 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:44.791 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:44.791 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:44.791 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:44.791 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:44.791 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:44.791 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:44.791 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:44.791 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:45.050 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:45.050 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:45.050 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:45.050 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.050 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.050 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:45.050 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.050 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.050 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.050 20:50:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:45.309 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:45.568 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:45.568 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:45.568 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:45.568 20:50:03 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:45.568 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:45.568 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:45.568 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:45.568 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:45.568 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:45.568 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.568 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:45.826 20:50:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:46.085 /dev/nbd0 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.085 1+0 records in 00:07:46.085 1+0 records out 00:07:46.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000973784 s, 4.2 MB/s 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:46.085 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:46.343 /dev/nbd1 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.343 1+0 records in 00:07:46.343 1+0 records out 00:07:46.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000926152 s, 4.4 MB/s 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:46.343 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:46.601 /dev/nbd10 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.601 1+0 records in 00:07:46.601 1+0 records out 00:07:46.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771764 s, 5.3 MB/s 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:46.601 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:46.859 /dev/nbd11 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.859 1+0 records in 00:07:46.859 1+0 records out 00:07:46.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000930011 s, 4.4 MB/s 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:46.859 20:50:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:46.860 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:46.860 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:46.860 20:50:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:47.118 /dev/nbd12 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:47.118 20:50:05 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.118 1+0 records in 00:07:47.118 1+0 records out 00:07:47.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106813 s, 3.8 MB/s 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.118 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:47.118 /dev/nbd13 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.376 1+0 records in 00:07:47.376 1+0 records out 00:07:47.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116096 s, 3.5 MB/s 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:47.376 /dev/nbd14 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.376 1+0 records in 00:07:47.376 1+0 records out 00:07:47.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112002 s, 3.7 MB/s 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:47.376 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:47.635 { 00:07:47.635 "nbd_device": "/dev/nbd0", 00:07:47.635 "bdev_name": "Nvme0n1" 00:07:47.635 }, 00:07:47.635 { 00:07:47.635 "nbd_device": "/dev/nbd1", 00:07:47.635 "bdev_name": "Nvme1n1p1" 00:07:47.635 }, 00:07:47.635 { 00:07:47.635 "nbd_device": "/dev/nbd10", 00:07:47.635 "bdev_name": "Nvme1n1p2" 00:07:47.635 }, 00:07:47.635 { 00:07:47.635 "nbd_device": "/dev/nbd11", 00:07:47.635 "bdev_name": "Nvme2n1" 00:07:47.635 }, 00:07:47.635 { 00:07:47.635 "nbd_device": "/dev/nbd12", 00:07:47.635 "bdev_name": "Nvme2n2" 00:07:47.635 }, 00:07:47.635 { 00:07:47.635 "nbd_device": "/dev/nbd13", 00:07:47.635 "bdev_name": "Nvme2n3" 00:07:47.635 }, 00:07:47.635 { 
00:07:47.635 "nbd_device": "/dev/nbd14", 00:07:47.635 "bdev_name": "Nvme3n1" 00:07:47.635 } 00:07:47.635 ]' 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:47.635 { 00:07:47.635 "nbd_device": "/dev/nbd0", 00:07:47.635 "bdev_name": "Nvme0n1" 00:07:47.635 }, 00:07:47.635 { 00:07:47.635 "nbd_device": "/dev/nbd1", 00:07:47.635 "bdev_name": "Nvme1n1p1" 00:07:47.635 }, 00:07:47.635 { 00:07:47.635 "nbd_device": "/dev/nbd10", 00:07:47.635 "bdev_name": "Nvme1n1p2" 00:07:47.635 }, 00:07:47.635 { 00:07:47.635 "nbd_device": "/dev/nbd11", 00:07:47.635 "bdev_name": "Nvme2n1" 00:07:47.635 }, 00:07:47.635 { 00:07:47.635 "nbd_device": "/dev/nbd12", 00:07:47.635 "bdev_name": "Nvme2n2" 00:07:47.635 }, 00:07:47.635 { 00:07:47.635 "nbd_device": "/dev/nbd13", 00:07:47.635 "bdev_name": "Nvme2n3" 00:07:47.635 }, 00:07:47.635 { 00:07:47.635 "nbd_device": "/dev/nbd14", 00:07:47.635 "bdev_name": "Nvme3n1" 00:07:47.635 } 00:07:47.635 ]' 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:47.635 /dev/nbd1 00:07:47.635 /dev/nbd10 00:07:47.635 /dev/nbd11 00:07:47.635 /dev/nbd12 00:07:47.635 /dev/nbd13 00:07:47.635 /dev/nbd14' 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:47.635 /dev/nbd1 00:07:47.635 /dev/nbd10 00:07:47.635 /dev/nbd11 00:07:47.635 /dev/nbd12 00:07:47.635 /dev/nbd13 00:07:47.635 /dev/nbd14' 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:47.635 256+0 records in 00:07:47.635 256+0 records out 00:07:47.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00635308 s, 165 MB/s 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:47.635 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:47.910 256+0 records in 00:07:47.910 256+0 records out 00:07:47.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152325 s, 6.9 MB/s 00:07:47.910 
20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:47.910 20:50:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:48.178 256+0 records in 00:07:48.178 256+0 records out 00:07:48.178 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.255822 s, 4.1 MB/s 00:07:48.178 20:50:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.178 20:50:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:48.436 256+0 records in 00:07:48.436 256+0 records out 00:07:48.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148696 s, 7.1 MB/s 00:07:48.436 20:50:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.436 20:50:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:48.436 256+0 records in 00:07:48.436 256+0 records out 00:07:48.436 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176179 s, 6.0 MB/s 00:07:48.436 20:50:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.436 20:50:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:48.694 256+0 records in 00:07:48.694 256+0 records out 00:07:48.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.18825 s, 5.6 MB/s 00:07:48.694 20:50:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.694 20:50:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:48.953 256+0 records in 00:07:48.953 256+0 records out 00:07:48.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.249414 s, 4.2 MB/s 00:07:48.953 20:50:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:48.953 20:50:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:49.214 256+0 records in 00:07:49.214 256+0 records out 00:07:49.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154517 s, 6.8 MB/s 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.214 20:50:07 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.214 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:49.474 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:49.474 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:49.474 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:49.474 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.474 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.475 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:49.475 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.475 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.475 
20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.475 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:49.475 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.736 20:50:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:49.998 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:49.998 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:49.998 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:49.998 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.998 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.998 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:49.998 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:49.998 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.998 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.998 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:50.257 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:50.257 20:50:08 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:50.257 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:50.257 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.257 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.257 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:50.257 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.257 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.257 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.257 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:50.516 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:50.516 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:50.516 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:50.516 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.516 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.516 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:50.516 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.516 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.516 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.516 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:50.774 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:50.775 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:50.775 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:50.775 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.775 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.775 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:50.775 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:50.775 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.775 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:50.775 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.775 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:50.775 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:50.775 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:50.775 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.035 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:51.035 
20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.035 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:51.035 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:51.035 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:51.035 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:51.035 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:51.035 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:51.035 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:51.035 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:51.035 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.035 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:51.035 20:50:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:51.035 malloc_lvol_verify 00:07:51.035 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:51.294 c7001a25-c4fd-4e73-a3c5-fd5cf0fe3e18 00:07:51.294 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:51.554 8a61b115-81bb-4b92-b5ab-9086766dfb15 00:07:51.554 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:51.814 /dev/nbd0 00:07:51.814 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:51.814 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:51.814 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:51.814 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:51.814 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:51.814 mke2fs 1.47.0 (5-Feb-2023) 00:07:51.814 Discarding device blocks: 0/4096 done 00:07:51.814 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:51.814 00:07:51.814 Allocating group tables: 0/1 done 00:07:51.814 Writing inode tables: 0/1 done 00:07:51.814 Creating journal (1024 blocks): done 00:07:51.814 Writing superblocks and filesystem accounting information: 0/1 done 00:07:51.814 00:07:51.814 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:51.814 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.814 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:51.814 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:51.814 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:51.814 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.814 20:50:09 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:52.073 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:52.073 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:52.073 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:52.073 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.073 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.073 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:52.073 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.073 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.073 20:50:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73049 00:07:52.073 20:50:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73049 ']' 00:07:52.073 20:50:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73049 00:07:52.073 20:50:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:52.073 20:50:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:52.073 20:50:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73049 00:07:52.073 20:50:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:52.073 20:50:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:52.073 20:50:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73049' 00:07:52.073 killing process with pid 73049 00:07:52.073 20:50:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73049 00:07:52.073 20:50:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73049 00:07:52.073 20:50:10 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:52.073 00:07:52.073 real 0m11.068s 00:07:52.073 user 0m15.423s 00:07:52.073 sys 0m3.872s 00:07:52.073 20:50:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.073 20:50:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:52.073 ************************************ 00:07:52.073 END TEST bdev_nbd 00:07:52.073 ************************************ 00:07:52.332 20:50:10 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:52.332 20:50:10 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:52.332 20:50:10 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:52.332 skipping fio tests on NVMe due to multi-ns failures. 00:07:52.332 20:50:10 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
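That closes TEST bdev_nbd: each bdev was exported over NBD, filled from a random pattern file, byte-compared with cmp, detached, and a logical volume was formatted with mkfs.ext4 as a final sanity check. The round trip condenses to a few commands, copied from the trace except for the scratch path, which is shortened here for illustration:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC nbd_start_disk Nvme0n1 /dev/nbd0             # map a bdev onto an nbd node
$RPC nbd_get_disks | jq -r '.[] | .nbd_device'    # enumerate active mappings
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0           # byte-compare the round trip
$RPC nbd_stop_disk /dev/nbd0                      # detach when done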
00:07:52.332 20:50:10 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:52.332 20:50:10 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:52.332 20:50:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:52.332 20:50:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.332 20:50:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.332 ************************************ 00:07:52.332 START TEST bdev_verify 00:07:52.332 ************************************ 00:07:52.332 20:50:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:52.332 [2024-11-20 20:50:10.296187] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:07:52.332 [2024-11-20 20:50:10.296301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73465 ] 00:07:52.332 [2024-11-20 20:50:10.437920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:52.591 [2024-11-20 20:50:10.468645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.591 [2024-11-20 20:50:10.468676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.849 Running I/O for 5 seconds... 
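bdev_verify drives the same bdevs through bdevperf instead of NBD. An annotated form of the invocation launched above; the flags and values are copied from the logged command line, while the comments are conventional bdevperf semantics rather than anything the log itself states:

args=(
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev config under test
    -q 128       # outstanding I/Os per job
    -o 4096      # I/O size in bytes
    -w verify    # write, read back, and compare data
    -t 5         # run time, matching 'Running I/O for 5 seconds...'
    -C           # passed through exactly as logged
    -m 0x3       # core mask 0x3: the two reactors on cores 0 and 1
)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"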
00:07:55.166 23424.00 IOPS, 91.50 MiB/s
[2024-11-20T20:50:14.229Z] 23456.00 IOPS, 91.62 MiB/s
[2024-11-20T20:50:15.171Z] 22506.67 IOPS, 87.92 MiB/s
[2024-11-20T20:50:16.110Z] 21616.00 IOPS, 84.44 MiB/s
[2024-11-20T20:50:16.110Z] 21760.00 IOPS, 85.00 MiB/s
00:07:57.991 Latency(us)
[2024-11-20T20:50:16.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:57.991 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.991 Verification LBA range: start 0x0 length 0xbd0bd
00:07:57.991 Nvme0n1 : 5.04 1625.78 6.35 0.00 0.00 78447.24 13913.80 74206.92
00:07:57.991 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.991 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:57.991 Nvme0n1 : 5.08 1436.23 5.61 0.00 0.00 87972.10 16837.71 76223.41
00:07:57.991 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.991 Verification LBA range: start 0x0 length 0x4ff80
00:07:57.991 Nvme1n1p1 : 5.07 1629.50 6.37 0.00 0.00 78149.99 11191.53 69367.34
00:07:57.991 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.991 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:57.991 Nvme1n1p1 : 5.08 1435.31 5.61 0.00 0.00 87933.54 14518.74 79046.50
00:07:57.991 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.991 Verification LBA range: start 0x0 length 0x4ff7f
00:07:57.991 Nvme1n1p2 : 5.07 1628.96 6.36 0.00 0.00 78067.05 10737.82 68560.74
00:07:57.991 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.991 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:57.991 Nvme1n1p2 : 5.07 1439.12 5.62 0.00 0.00 88717.56 18047.61 81466.29
00:07:57.991 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.991 Verification LBA range: start 0x0 length 0x80000
00:07:57.991 Nvme2n1 : 5.07 1628.48 6.36 0.00 0.00 77974.40 11191.53 67350.84
00:07:57.991 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.991 Verification LBA range: start 0x80000 length 0x80000
00:07:57.991 Nvme2n1 : 5.07 1437.82 5.62 0.00 0.00 88561.57 20265.75 72190.42
00:07:57.991 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.991 Verification LBA range: start 0x0 length 0x80000
00:07:57.991 Nvme2n2 : 5.08 1636.77 6.39 0.00 0.00 77669.50 7813.91 68560.74
00:07:57.991 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.991 Verification LBA range: start 0x80000 length 0x80000
00:07:57.991 Nvme2n2 : 5.08 1437.40 5.61 0.00 0.00 88401.33 19761.62 70980.53
00:07:57.991 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.991 Verification LBA range: start 0x0 length 0x80000
00:07:57.991 Nvme2n3 : 5.09 1635.41 6.39 0.00 0.00 77566.64 11090.71 69770.63
00:07:57.991 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.991 Verification LBA range: start 0x80000 length 0x80000
00:07:57.991 Nvme2n3 : 5.08 1436.98 5.61 0.00 0.00 88272.54 19963.27 69770.63
00:07:57.991 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:57.991 Verification LBA range: start 0x0 length 0x20000
00:07:57.991 Nvme3n1 : 5.09 1634.96 6.39 0.00 0.00 77455.73 11040.30 69770.63
00:07:57.991 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:57.991 Verification LBA range: start 0x20000 length 0x20000
00:07:57.991 Nvme3n1 : 5.08 1436.61 5.61 0.00 0.00 88099.22 19358.33 72593.72
[2024-11-20T20:50:16.110Z] ===================================================================================================================
[2024-11-20T20:50:16.110Z] Total : 21479.34 83.90 0.00 0.00 82765.17 7813.91 81466.29
00:07:58.936
00:07:58.936 real 0m6.542s
00:07:58.936 user 0m12.309s
00:07:58.936 sys 0m0.229s
20:50:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
20:50:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:58.936 ************************************
00:07:58.936 END TEST bdev_verify
00:07:58.936 ************************************
00:07:58.936 20:50:16 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
20:50:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
20:50:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
20:50:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:58.937 ************************************
00:07:58.937 START TEST bdev_verify_big_io
00:07:58.937 ************************************
00:07:58.937 20:50:16 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:58.937 [2024-11-20 20:50:16.927485] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:07:58.937 [2024-11-20 20:50:16.927647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73558 ]
00:07:59.198 [2024-11-20 20:50:17.070343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:59.198 [2024-11-20 20:50:17.112497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:59.198 [2024-11-20 20:50:17.112554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:59.792 Running I/O for 5 seconds...
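The big-I/O pass launched just above reuses the verify workload with -o 65536, so each operation moves 64 KiB and the MiB/s column is simply IOPS * 65536 / 2^20. Checking that against the first progress line printed below (1418.00 IOPS, 88.62 MiB/s):

echo '1418 * 65536 / 1048576' | bc -l    # 88.625 -> reported as 88.62 MiB/s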
00:08:05.367 1418.00 IOPS, 88.62 MiB/s
[2024-11-20T20:50:24.420Z] 2656.50 IOPS, 166.03 MiB/s
[2024-11-20T20:50:24.680Z] 3275.67 IOPS, 204.73 MiB/s
00:08:06.561 Latency(us)
[2024-11-20T20:50:24.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:06.562 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.562 Verification LBA range: start 0x0 length 0xbd0b
00:08:06.562 Nvme0n1 : 5.64 120.51 7.53 0.00 0.00 1017560.12 35893.56 1116330.14
00:08:06.562 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.562 Verification LBA range: start 0xbd0b length 0xbd0b
00:08:06.562 Nvme0n1 : 6.11 49.74 3.11 0.00 0.00 2430089.47 34885.32 2116510.33
00:08:06.562 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.562 Verification LBA range: start 0x0 length 0x4ff8
00:08:06.562 Nvme1n1p1 : 5.74 111.34 6.96 0.00 0.00 1083684.74 93565.24 1716438.25
00:08:06.562 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.562 Verification LBA range: start 0x4ff8 length 0x4ff8
00:08:06.562 Nvme1n1p1 : 6.03 63.56 3.97 0.00 0.00 1791170.62 129055.51 1780966.01
00:08:06.562 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.562 Verification LBA range: start 0x0 length 0x4ff7
00:08:06.562 Nvme1n1p2 : 5.82 125.12 7.82 0.00 0.00 940032.21 75820.11 1542213.32
00:08:06.562 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.562 Verification LBA range: start 0x4ff7 length 0x4ff7
00:08:06.562 Nvme1n1p2 : 6.12 73.26 4.58 0.00 0.00 1486759.33 43354.58 1780966.01
00:08:06.562 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.562 Verification LBA range: start 0x0 length 0x8000
00:08:06.562 Nvme2n1 : 5.90 130.28 8.14 0.00 0.00 867544.81 77030.01 1025991.29
00:08:06.562 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.562 Verification LBA range: start 0x8000 length 0x8000
00:08:06.562 Nvme2n1 : 6.18 82.71 5.17 0.00 0.00 1256870.58 38111.70 1806777.11
00:08:06.562 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.562 Verification LBA range: start 0x0 length 0x8000
00:08:06.562 Nvme2n2 : 5.90 135.15 8.45 0.00 0.00 820148.11 77030.01 1051802.39
00:08:06.562 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.562 Verification LBA range: start 0x8000 length 0x8000
00:08:06.562 Nvme2n2 : 6.34 101.92 6.37 0.00 0.00 976367.00 32465.53 1832588.21
00:08:06.562 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.562 Verification LBA range: start 0x0 length 0x8000
00:08:06.562 Nvme2n3 : 6.01 144.58 9.04 0.00 0.00 746340.12 28432.54 1071160.71
00:08:06.562 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.562 Verification LBA range: start 0x8000 length 0x8000
00:08:06.562 Nvme2n3 : 6.57 159.61 9.98 0.00 0.00 596615.68 15930.29 1871304.86
00:08:06.562 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:06.562 Verification LBA range: start 0x0 length 0x2000
00:08:06.562 Nvme3n1 : 6.02 159.56 9.97 0.00 0.00 662967.06 920.02 1096971.82
00:08:06.562 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:06.562 Verification LBA range: start 0x2000 length 0x2000
00:08:06.562 Nvme3n1 : 6.83 281.02 17.56 0.00 0.00 324109.87 204.80 1910021.51
[2024-11-20T20:50:24.681Z] ===================================================================================================================
[2024-11-20T20:50:24.681Z] Total : 1738.36 108.65 0.00 0.00 867653.72 204.80 2116510.33
00:08:07.504
00:08:07.504 real 0m8.587s
00:08:07.504 user 0m16.256s
00:08:07.504 sys 0m0.357s
20:50:25 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
************************************
00:08:07.504 END TEST bdev_verify_big_io
************************************
00:08:07.504 20:50:25 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:08:07.504 20:50:25 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
20:50:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
20:50:25 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
20:50:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:07.504 ************************************
00:08:07.504 START TEST bdev_write_zeroes
00:08:07.504 ************************************
00:08:07.504 20:50:25 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:07.766 [2024-11-20 20:50:25.563431] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:08:07.766 [2024-11-20 20:50:25.563554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73667 ]
00:08:08.340 [2024-11-20 20:50:25.709717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:08.340 [2024-11-20 20:50:25.739415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:08.340 Running I/O for 1 seconds...
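bdev_write_zeroes swaps the workload again, exercising each bdev's zero-fill path for one second on a single core (-c 0x1 in the EAL parameters above). The command line is copied from the logged run_test; the throughput check uses the first line of output below:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1
echo '48832 * 4096 / 1048576' | bc -l    # 190.75, matching '48832.00 IOPS, 190.75 MiB/s'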
00:08:09.283 48832.00 IOPS, 190.75 MiB/s
00:08:09.283 Latency(us)
[2024-11-20T20:50:27.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:09.283 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:09.283 Nvme0n1 : 1.03 6964.94 27.21 0.00 0.00 18330.15 11443.59 35288.62
00:08:09.283 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:09.283 Nvme1n1p1 : 1.03 6956.20 27.17 0.00 0.00 18323.00 14014.62 35288.62
00:08:09.283 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:09.283 Nvme1n1p2 : 1.03 6947.59 27.14 0.00 0.00 18246.99 14115.45 31860.58
00:08:09.283 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:09.283 Nvme2n1 : 1.03 6939.73 27.11 0.00 0.00 18210.62 13913.80 31255.63
00:08:09.283 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:09.283 Nvme2n2 : 1.03 6931.74 27.08 0.00 0.00 18172.14 11292.36 30650.68
00:08:09.283 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:09.283 Nvme2n3 : 1.04 6923.92 27.05 0.00 0.00 18138.67 11645.24 30247.38
00:08:09.283 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:09.283 Nvme3n1 : 1.04 6916.15 27.02 0.00 0.00 18115.61 10687.41 30650.68
[2024-11-20T20:50:27.402Z] ===================================================================================================================
[2024-11-20T20:50:27.402Z] Total : 48580.27 189.77 0.00 0.00 18219.60 10687.41 35288.62
00:08:09.543
00:08:09.543 real 0m2.035s
00:08:09.543 user 0m1.681s
00:08:09.543 sys 0m0.235s
20:50:27 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:09.543 ************************************
00:08:09.543 END TEST bdev_write_zeroes
00:08:09.543 ************************************
20:50:27 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
20:50:27 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
20:50:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
20:50:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
20:50:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:09.543 ************************************
00:08:09.543 START TEST bdev_json_nonenclosed
00:08:09.543 ************************************
20:50:27 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:09.801 [2024-11-20 20:50:27.677705] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
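bdev_json_nonenclosed is a negative test: it feeds bdevperf a configuration whose top level is not a JSON object and expects a clean error instead of a crash, as the json_config error logged below confirms. A hedged reconstruction of the two shapes involved; the real nonenclosed.json under test/bdev/ may differ, and the paths here are illustrative:

cat > /tmp/enclosed.json <<'EOF'
{ "subsystems": [] }
EOF
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": []
EOF
# bdevperf --json /tmp/nonenclosed.json should abort with
# "Invalid JSON configuration: not enclosed in {}."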
00:08:09.801 [2024-11-20 20:50:27.677825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73709 ] 00:08:09.801 [2024-11-20 20:50:27.822876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.801 [2024-11-20 20:50:27.849209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.801 [2024-11-20 20:50:27.849313] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:09.801 [2024-11-20 20:50:27.849330] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:09.801 [2024-11-20 20:50:27.849346] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.062 00:08:10.062 real 0m0.312s 00:08:10.062 user 0m0.123s 00:08:10.062 sys 0m0.086s 00:08:10.062 20:50:27 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.062 20:50:27 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:10.062 ************************************ 00:08:10.062 END TEST bdev_json_nonenclosed 00:08:10.062 ************************************ 00:08:10.062 20:50:27 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:10.062 20:50:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:10.062 20:50:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.062 20:50:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:10.062 ************************************ 00:08:10.063 START TEST bdev_json_nonarray 00:08:10.063 ************************************ 00:08:10.063 20:50:27 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:10.063 [2024-11-20 20:50:28.055267] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:08:10.063 [2024-11-20 20:50:28.055380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73729 ] 00:08:10.323 [2024-11-20 20:50:28.202234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.323 [2024-11-20 20:50:28.229964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.323 [2024-11-20 20:50:28.230069] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:10.323 [2024-11-20 20:50:28.230090] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:10.323 [2024-11-20 20:50:28.230104] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:10.323 00:08:10.323 real 0m0.314s 00:08:10.323 user 0m0.116s 00:08:10.323 sys 0m0.094s 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.323 ************************************ 00:08:10.323 END TEST bdev_json_nonarray 00:08:10.323 ************************************ 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:10.323 20:50:28 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:10.323 20:50:28 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:10.323 20:50:28 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:10.323 20:50:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:10.323 20:50:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.323 20:50:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:10.323 ************************************ 00:08:10.323 START TEST bdev_gpt_uuid 00:08:10.323 ************************************ 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73755 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 73755 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 73755 ']' 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.323 20:50:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:10.584 [2024-11-20 20:50:28.444908] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
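bdev_gpt_uuid checks that GPT partitions can be addressed by their unique partition GUID. rpc_cmd in the harness wraps scripts/rpc.py against the spdk_tgt just started; a condensed form of the lookup that produces the JSON dumps below, using a GUID and field path taken from the trace itself:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
    | jq -r '.[0].driver_specific.gpt.partition_name'    # -> SPDK_TEST_first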
00:08:10.584 [2024-11-20 20:50:28.445036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73755 ] 00:08:10.584 [2024-11-20 20:50:28.589968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.584 [2024-11-20 20:50:28.618696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:11.546 Some configs were skipped because the RPC state that can call them passed over. 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:11.546 { 00:08:11.546 "name": "Nvme1n1p1", 00:08:11.546 "aliases": [ 00:08:11.546 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:11.546 ], 00:08:11.546 "product_name": "GPT Disk", 00:08:11.546 "block_size": 4096, 00:08:11.546 "num_blocks": 655104, 00:08:11.546 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:11.546 "assigned_rate_limits": { 00:08:11.546 "rw_ios_per_sec": 0, 00:08:11.546 "rw_mbytes_per_sec": 0, 00:08:11.546 "r_mbytes_per_sec": 0, 00:08:11.546 "w_mbytes_per_sec": 0 00:08:11.546 }, 00:08:11.546 "claimed": false, 00:08:11.546 "zoned": false, 00:08:11.546 "supported_io_types": { 00:08:11.546 "read": true, 00:08:11.546 "write": true, 00:08:11.546 "unmap": true, 00:08:11.546 "flush": true, 00:08:11.546 "reset": true, 00:08:11.546 "nvme_admin": false, 00:08:11.546 "nvme_io": false, 00:08:11.546 "nvme_io_md": false, 00:08:11.546 "write_zeroes": true, 00:08:11.546 "zcopy": false, 00:08:11.546 "get_zone_info": false, 00:08:11.546 "zone_management": false, 00:08:11.546 "zone_append": false, 00:08:11.546 "compare": true, 00:08:11.546 "compare_and_write": false, 00:08:11.546 "abort": true, 00:08:11.546 "seek_hole": false, 00:08:11.546 "seek_data": false, 00:08:11.546 "copy": true, 00:08:11.546 "nvme_iov_md": false 00:08:11.546 }, 00:08:11.546 "driver_specific": { 
00:08:11.546 "gpt": { 00:08:11.546 "base_bdev": "Nvme1n1", 00:08:11.546 "offset_blocks": 256, 00:08:11.546 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:11.546 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:11.546 "partition_name": "SPDK_TEST_first" 00:08:11.546 } 00:08:11.546 } 00:08:11.546 } 00:08:11.546 ]' 00:08:11.546 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:11.808 { 00:08:11.808 "name": "Nvme1n1p2", 00:08:11.808 "aliases": [ 00:08:11.808 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:11.808 ], 00:08:11.808 "product_name": "GPT Disk", 00:08:11.808 "block_size": 4096, 00:08:11.808 "num_blocks": 655103, 00:08:11.808 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:11.808 "assigned_rate_limits": { 00:08:11.808 "rw_ios_per_sec": 0, 00:08:11.808 "rw_mbytes_per_sec": 0, 00:08:11.808 "r_mbytes_per_sec": 0, 00:08:11.808 "w_mbytes_per_sec": 0 00:08:11.808 }, 00:08:11.808 "claimed": false, 00:08:11.808 "zoned": false, 00:08:11.808 "supported_io_types": { 00:08:11.808 "read": true, 00:08:11.808 "write": true, 00:08:11.808 "unmap": true, 00:08:11.808 "flush": true, 00:08:11.808 "reset": true, 00:08:11.808 "nvme_admin": false, 00:08:11.808 "nvme_io": false, 00:08:11.808 "nvme_io_md": false, 00:08:11.808 "write_zeroes": true, 00:08:11.808 "zcopy": false, 00:08:11.808 "get_zone_info": false, 00:08:11.808 "zone_management": false, 00:08:11.808 "zone_append": false, 00:08:11.808 "compare": true, 00:08:11.808 "compare_and_write": false, 00:08:11.808 "abort": true, 00:08:11.808 "seek_hole": false, 00:08:11.808 "seek_data": false, 00:08:11.808 "copy": true, 00:08:11.808 "nvme_iov_md": false 00:08:11.808 }, 00:08:11.808 "driver_specific": { 00:08:11.808 "gpt": { 00:08:11.808 "base_bdev": "Nvme1n1", 00:08:11.808 "offset_blocks": 655360, 00:08:11.808 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:11.808 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:11.808 "partition_name": "SPDK_TEST_second" 00:08:11.808 } 00:08:11.808 } 00:08:11.808 } 00:08:11.808 ]' 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 73755 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 73755 ']' 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 73755 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73755 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.808 killing process with pid 73755 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73755' 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 73755 00:08:11.808 20:50:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 73755 00:08:12.380 00:08:12.380 real 0m2.010s 00:08:12.380 user 0m2.040s 00:08:12.380 sys 0m0.496s 00:08:12.380 20:50:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.380 ************************************ 00:08:12.380 END TEST bdev_gpt_uuid 00:08:12.380 ************************************ 00:08:12.380 20:50:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:12.380 20:50:30 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:12.380 20:50:30 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:12.380 20:50:30 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:12.380 20:50:30 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:12.380 20:50:30 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:12.380 20:50:30 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:12.380 20:50:30 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:12.380 20:50:30 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:12.380 20:50:30 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:12.953 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:12.953 Waiting for block devices as requested 00:08:12.953 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:12.953 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:13.214 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:13.214 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:18.497 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:18.497 20:50:36 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:18.497 20:50:36 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:18.497 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:18.497 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:18.497 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:18.497 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:18.497 20:50:36 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:18.497 00:08:18.497 real 0m50.779s 00:08:18.497 user 1m3.947s 00:08:18.497 sys 0m8.688s 00:08:18.497 20:50:36 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.497 20:50:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:18.497 ************************************ 00:08:18.498 END TEST blockdev_nvme_gpt 00:08:18.498 ************************************ 00:08:18.498 20:50:36 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:18.498 20:50:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.498 20:50:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.498 20:50:36 -- common/autotest_common.sh@10 -- # set +x 00:08:18.498 ************************************ 00:08:18.498 START TEST nvme 00:08:18.498 ************************************ 00:08:18.498 20:50:36 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:18.757 * Looking for test storage... 00:08:18.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:18.757 20:50:36 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:18.757 20:50:36 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:18.757 20:50:36 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:18.757 20:50:36 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:18.757 20:50:36 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.757 20:50:36 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.757 20:50:36 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.757 20:50:36 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.757 20:50:36 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.757 20:50:36 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.757 20:50:36 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.757 20:50:36 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.757 20:50:36 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.757 20:50:36 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.757 20:50:36 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.757 20:50:36 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:18.757 20:50:36 nvme -- scripts/common.sh@345 -- # : 1 00:08:18.757 20:50:36 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.757 20:50:36 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.757 20:50:36 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:18.757 20:50:36 nvme -- scripts/common.sh@353 -- # local d=1 00:08:18.757 20:50:36 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.757 20:50:36 nvme -- scripts/common.sh@355 -- # echo 1 00:08:18.757 20:50:36 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.757 20:50:36 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:18.757 20:50:36 nvme -- scripts/common.sh@353 -- # local d=2 00:08:18.757 20:50:36 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.757 20:50:36 nvme -- scripts/common.sh@355 -- # echo 2 00:08:18.757 20:50:36 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.757 20:50:36 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.757 20:50:36 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.757 20:50:36 nvme -- scripts/common.sh@368 -- # return 0 00:08:18.757 20:50:36 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.757 20:50:36 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:18.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.757 --rc genhtml_branch_coverage=1 00:08:18.757 --rc genhtml_function_coverage=1 00:08:18.757 --rc genhtml_legend=1 00:08:18.757 --rc geninfo_all_blocks=1 00:08:18.757 --rc geninfo_unexecuted_blocks=1 00:08:18.757 00:08:18.757 ' 00:08:18.757 20:50:36 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:18.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.757 --rc genhtml_branch_coverage=1 00:08:18.757 --rc genhtml_function_coverage=1 00:08:18.757 --rc genhtml_legend=1 00:08:18.757 --rc geninfo_all_blocks=1 00:08:18.757 --rc geninfo_unexecuted_blocks=1 00:08:18.757 00:08:18.757 ' 00:08:18.758 20:50:36 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:18.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.758 --rc genhtml_branch_coverage=1 00:08:18.758 --rc genhtml_function_coverage=1 00:08:18.758 --rc genhtml_legend=1 00:08:18.758 --rc geninfo_all_blocks=1 00:08:18.758 --rc geninfo_unexecuted_blocks=1 00:08:18.758 00:08:18.758 ' 00:08:18.758 20:50:36 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:18.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.758 --rc genhtml_branch_coverage=1 00:08:18.758 --rc genhtml_function_coverage=1 00:08:18.758 --rc genhtml_legend=1 00:08:18.758 --rc geninfo_all_blocks=1 00:08:18.758 --rc geninfo_unexecuted_blocks=1 00:08:18.758 00:08:18.758 ' 00:08:18.758 20:50:36 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:19.325 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:19.894 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.894 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.894 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.894 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:19.894 20:50:37 nvme -- nvme/nvme.sh@79 -- # uname 00:08:19.894 20:50:37 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:19.894 20:50:37 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:19.894 20:50:37 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:19.894 20:50:37 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:19.894 20:50:37 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:19.894 20:50:37 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:19.894 Waiting for stub to ready for secondary processes... 00:08:19.894 20:50:37 nvme -- common/autotest_common.sh@1075 -- # stubpid=74387 00:08:19.894 20:50:37 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:19.894 20:50:37 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:19.894 20:50:37 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74387 ]] 00:08:19.894 20:50:37 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:19.894 20:50:37 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:19.894 [2024-11-20 20:50:37.875970] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:08:19.894 [2024-11-20 20:50:37.876105] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:20.829 [2024-11-20 20:50:38.627571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.829 [2024-11-20 20:50:38.640388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.829 [2024-11-20 20:50:38.640674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.829 [2024-11-20 20:50:38.640789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.829 [2024-11-20 20:50:38.651730] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:20.829 [2024-11-20 20:50:38.651774] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:20.829 [2024-11-20 20:50:38.663554] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:20.829 [2024-11-20 20:50:38.663733] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:20.829 [2024-11-20 20:50:38.664425] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:20.829 [2024-11-20 20:50:38.664858] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:20.829 [2024-11-20 20:50:38.665009] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:20.829 [2024-11-20 20:50:38.666257] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:20.829 [2024-11-20 20:50:38.666595] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:20.829 [2024-11-20 20:50:38.666716] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:20.829 [2024-11-20 20:50:38.668935] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:20.829 [2024-11-20 20:50:38.669147] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:20.829 [2024-11-20 20:50:38.669217] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:20.829 [2024-11-20 20:50:38.669264] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:20.829 [2024-11-20 20:50:38.669307] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:20.829 done. 00:08:20.829 20:50:38 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:20.829 20:50:38 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:20.829 20:50:38 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:20.829 20:50:38 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:20.829 20:50:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.829 20:50:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:20.829 ************************************ 00:08:20.829 START TEST nvme_reset 00:08:20.829 ************************************ 00:08:20.829 20:50:38 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:21.087 Initializing NVMe Controllers 00:08:21.087 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:21.087 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:21.087 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:21.087 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:21.087 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:21.087 00:08:21.087 real 0m0.181s 00:08:21.088 user 0m0.056s 00:08:21.088 sys 0m0.078s 00:08:21.088 20:50:39 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.088 ************************************ 00:08:21.088 END TEST nvme_reset 00:08:21.088 ************************************ 00:08:21.088 20:50:39 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:21.088 20:50:39 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:21.088 20:50:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:21.088 20:50:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.088 20:50:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.088 ************************************ 00:08:21.088 START TEST nvme_identify 00:08:21.088 ************************************ 00:08:21.088 20:50:39 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:21.088 20:50:39 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:21.088 20:50:39 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:21.088 20:50:39 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:21.088 20:50:39 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:21.088 20:50:39 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:21.088 20:50:39 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:21.088 20:50:39 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:21.088 20:50:39 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:21.088 20:50:39 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:21.088 20:50:39 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:21.088 20:50:39 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:21.088 20:50:39 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:21.348 
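The get_nvme_bdfs plumbing traced above reduces to one jq filter over gen_nvme.sh output. A minimal standalone sketch of the same discovery step, assuming an SPDK checkout at the path used in this run; the empty-list guard is added here for illustration and is not part of the harness:

#!/usr/bin/env bash
# Sketch: enumerate NVMe controller PCI addresses the way get_nvme_bdfs does,
# then point spdk_nvme_identify at shared-memory group 0, as in the trace above.
set -euo pipefail
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout location
# gen_nvme.sh emits a bdev JSON config; each attach entry carries the
# controller's PCI address in .params.traddr, which the jq filter extracts.
mapfile -t bdfs < <("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf 'found NVMe controller at %s\n' "${bdfs[@]}"
"$SPDK_DIR/build/bin/spdk_nvme_identify" -i 0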
===================================================== 00:08:21.348 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:21.348 ===================================================== 00:08:21.348 Controller Capabilities/Features 00:08:21.348 ================================ 00:08:21.348 Vendor ID: 1b36 00:08:21.348 Subsystem Vendor ID: 1af4 00:08:21.348 Serial Number: 12340 00:08:21.348 Model Number: QEMU NVMe Ctrl 00:08:21.348 Firmware Version: 8.0.0 00:08:21.348 Recommended Arb Burst: 6 00:08:21.348 IEEE OUI Identifier: 00 54 52 00:08:21.348 Multi-path I/O 00:08:21.348 May have multiple subsystem ports: No 00:08:21.348 May have multiple controllers: No 00:08:21.348 Associated with SR-IOV VF: No 00:08:21.348 Max Data Transfer Size: 524288 00:08:21.348 Max Number of Namespaces: 256 00:08:21.348 Max Number of I/O Queues: 64 00:08:21.348 NVMe Specification Version (VS): 1.4 00:08:21.348 NVMe Specification Version (Identify): 1.4 00:08:21.348 Maximum Queue Entries: 2048 00:08:21.348 Contiguous Queues Required: Yes 00:08:21.348 Arbitration Mechanisms Supported 00:08:21.348 Weighted Round Robin: Not Supported 00:08:21.348 Vendor Specific: Not Supported 00:08:21.348 Reset Timeout: 7500 ms 00:08:21.348 Doorbell Stride: 4 bytes 00:08:21.348 NVM Subsystem Reset: Not Supported 00:08:21.348 Command Sets Supported 00:08:21.348 NVM Command Set: Supported 00:08:21.348 Boot Partition: Not Supported 00:08:21.348 Memory Page Size Minimum: 4096 bytes 00:08:21.348 Memory Page Size Maximum: 65536 bytes 00:08:21.348 Persistent Memory Region: Not Supported 00:08:21.348 Optional Asynchronous Events Supported 00:08:21.348 Namespace Attribute Notices: Supported 00:08:21.348 Firmware Activation Notices: Not Supported 00:08:21.348 ANA Change Notices: Not Supported 00:08:21.348 PLE Aggregate Log Change Notices: Not Supported 00:08:21.348 LBA Status Info Alert Notices: Not Supported 00:08:21.348 EGE Aggregate Log Change Notices: Not Supported 00:08:21.348 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.348 Zone Descriptor Change Notices: Not Supported 00:08:21.348 Discovery Log Change Notices: Not Supported 00:08:21.348 Controller Attributes 00:08:21.348 128-bit Host Identifier: Not Supported 00:08:21.348 Non-Operational Permissive Mode: Not Supported 00:08:21.348 NVM Sets: Not Supported 00:08:21.348 Read Recovery Levels: Not Supported 00:08:21.348 Endurance Groups: Not Supported 00:08:21.348 Predictable Latency Mode: Not Supported 00:08:21.348 Traffic Based Keep ALive: Not Supported 00:08:21.348 Namespace Granularity: Not Supported 00:08:21.348 SQ Associations: Not Supported 00:08:21.348 UUID List: Not Supported 00:08:21.348 Multi-Domain Subsystem: Not Supported 00:08:21.348 Fixed Capacity Management: Not Supported 00:08:21.348 Variable Capacity Management: Not Supported 00:08:21.348 Delete Endurance Group: Not Supported 00:08:21.348 Delete NVM Set: Not Supported 00:08:21.348 Extended LBA Formats Supported: Supported 00:08:21.348 Flexible Data Placement Supported: Not Supported 00:08:21.348 00:08:21.348 Controller Memory Buffer Support 00:08:21.348 ================================ 00:08:21.348 Supported: No 00:08:21.348 00:08:21.348 Persistent Memory Region Support 00:08:21.348 ================================ 00:08:21.348 Supported: No 00:08:21.348 00:08:21.348 Admin Command Set Attributes 00:08:21.348 ============================ 00:08:21.348 Security Send/Receive: Not Supported 00:08:21.348 Format NVM: Supported 00:08:21.348 Firmware Activate/Download: Not Supported 00:08:21.348 Namespace Management: 
Supported 00:08:21.348 Device Self-Test: Not Supported 00:08:21.348 Directives: Supported 00:08:21.348 NVMe-MI: Not Supported 00:08:21.348 Virtualization Management: Not Supported 00:08:21.348 Doorbell Buffer Config: Supported 00:08:21.348 Get LBA Status Capability: Not Supported 00:08:21.348 Command & Feature Lockdown Capability: Not Supported 00:08:21.348 Abort Command Limit: 4 00:08:21.348 Async Event Request Limit: 4 00:08:21.348 Number of Firmware Slots: N/A 00:08:21.348 Firmware Slot 1 Read-Only: N/A 00:08:21.348 Firmware Activation Without Reset: N/A 00:08:21.348 Multiple Update Detection Support: N/A 00:08:21.348 Firmware Update Granularity: No Information Provided 00:08:21.348 Per-Namespace SMART Log: Yes 00:08:21.348 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.348 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:21.348 Command Effects Log Page: Supported 00:08:21.348 Get Log Page Extended Data: Supported 00:08:21.348 Telemetry Log Pages: Not Supported 00:08:21.348 Persistent Event Log Pages: Not Supported 00:08:21.348 Supported Log Pages Log Page: May Support 00:08:21.348 Commands Supported & Effects Log Page: Not Supported 00:08:21.348 Feature Identifiers & Effects Log Page:May Support 00:08:21.348 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.348 Data Area 4 for Telemetry Log: Not Supported 00:08:21.348 Error Log Page Entries Supported: 1 00:08:21.348 Keep Alive: Not Supported 00:08:21.348 00:08:21.348 NVM Command Set Attributes 00:08:21.348 ========================== 00:08:21.348 Submission Queue Entry Size 00:08:21.348 Max: 64 00:08:21.348 Min: 64 00:08:21.348 Completion Queue Entry Size 00:08:21.348 Max: 16 00:08:21.348 Min: 16 00:08:21.348 Number of Namespaces: 256 00:08:21.348 Compare Command: Supported 00:08:21.348 Write Uncorrectable Command: Not Supported 00:08:21.348 Dataset Management Command: Supported 00:08:21.348 Write Zeroes Command: Supported 00:08:21.348 Set Features Save Field: Supported 00:08:21.348 Reservations: Not Supported 00:08:21.348 Timestamp: Supported 00:08:21.348 Copy: Supported 00:08:21.348 Volatile Write Cache: Present 00:08:21.348 Atomic Write Unit (Normal): 1 00:08:21.348 Atomic Write Unit (PFail): 1 00:08:21.348 Atomic Compare & Write Unit: 1 00:08:21.348 Fused Compare & Write: Not Supported 00:08:21.348 Scatter-Gather List 00:08:21.348 SGL Command Set: Supported 00:08:21.348 SGL Keyed: Not Supported 00:08:21.348 SGL Bit Bucket Descriptor: Not Supported 00:08:21.348 SGL Metadata Pointer: Not Supported 00:08:21.348 Oversized SGL: Not Supported 00:08:21.348 SGL Metadata Address: Not Supported 00:08:21.348 SGL Offset: Not Supported 00:08:21.348 Transport SGL Data Block: Not Supported 00:08:21.348 Replay Protected Memory Block: Not Supported 00:08:21.348 00:08:21.348 Firmware Slot Information 00:08:21.348 ========================= 00:08:21.348 Active slot: 1 00:08:21.348 Slot 1 Firmware Revision: 1.0 00:08:21.348 00:08:21.348 00:08:21.348 Commands Supported and Effects 00:08:21.348 ============================== 00:08:21.348 Admin Commands 00:08:21.348 -------------- 00:08:21.348 Delete I/O Submission Queue (00h): Supported 00:08:21.348 Create I/O Submission Queue (01h): Supported 00:08:21.348 Get Log Page (02h): Supported 00:08:21.348 Delete I/O Completion Queue (04h): Supported 00:08:21.348 Create I/O Completion Queue (05h): Supported 00:08:21.348 Identify (06h): Supported 00:08:21.348 Abort (08h): Supported 00:08:21.348 Set Features (09h): Supported 00:08:21.348 Get Features (0Ah): Supported 00:08:21.348 Asynchronous 
Event Request (0Ch): Supported 00:08:21.348 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.348 Directive Send (19h): Supported 00:08:21.348 Directive Receive (1Ah): Supported 00:08:21.348 Virtualization Management (1Ch): Supported 00:08:21.348 Doorbell Buffer Config (7Ch): Supported 00:08:21.348 Format NVM (80h): Supported LBA-Change 00:08:21.348 I/O Commands 00:08:21.348 ------------ 00:08:21.348 Flush (00h): Supported LBA-Change 00:08:21.348 Write (01h): Supported LBA-Change 00:08:21.348 Read (02h): Supported 00:08:21.348 Compare (05h): Supported 00:08:21.348 Write Zeroes (08h): Supported LBA-Change 00:08:21.348 Dataset Management (09h): Supported LBA-Change 00:08:21.348 Unknown (0Ch): Supported 00:08:21.348 Unknown (12h): Supported 00:08:21.348 Copy (19h): Supported LBA-Change 00:08:21.348 Unknown (1Dh): Supported LBA-Change 00:08:21.348 00:08:21.348 Error Log 00:08:21.348 ========= 00:08:21.348 00:08:21.348 Arbitration 00:08:21.348 =========== 00:08:21.348 Arbitration Burst: no limit 00:08:21.348 00:08:21.348 Power Management 00:08:21.348 ================ 00:08:21.348 Number of Power States: 1 00:08:21.348 Current Power State: Power State #0 00:08:21.348 Power State #0: 00:08:21.348 Max Power: 25.00 W 00:08:21.348 Non-Operational State: Operational 00:08:21.348 Entry Latency: 16 microseconds 00:08:21.348 Exit Latency: 4 microseconds 00:08:21.348 Relative Read Throughput: 0 00:08:21.348 Relative Read Latency: 0 00:08:21.348 Relative Write Throughput: 0 00:08:21.348 Relative Write Latency: 0 00:08:21.348 Idle Power: Not Reported 00:08:21.348 Active Power: Not Reported 00:08:21.348 Non-Operational Permissive Mode: Not Supported 00:08:21.348 00:08:21.348 Health Information 00:08:21.348 ================== 00:08:21.348 Critical Warnings: 00:08:21.348 Available Spare Space: OK 00:08:21.348 Temperature: OK 00:08:21.349 Device Reliability: OK 00:08:21.349 Read Only: No 00:08:21.349 Volatile Memory Backup: OK 00:08:21.349 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.349 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.349 Available Spare: 0% 00:08:21.349 Available Spare Threshold: 0% 00:08:21.349 Life Percentage Used: 0% 00:08:21.349 Data Units Read: 663 00:08:21.349 Data Units Written: 591 00:08:21.349 Host Read Commands: 35436 00:08:21.349 Host Write Commands: 35222 00:08:21.349 Controller Busy Time: 0 minutes 00:08:21.349 Power Cycles: 0 00:08:21.349 Power On Hours: 0 hours 00:08:21.349 Unsafe Shutdowns: 0 00:08:21.349 Unrecoverable Media Errors: 0 00:08:21.349 Lifetime Error Log Entries: 0 00:08:21.349 Warning Temperature Time: 0 minutes 00:08:21.349 Critical Temperature Time: 0 minutes 00:08:21.349 00:08:21.349 Number of Queues 00:08:21.349 ================ 00:08:21.349 Number of I/O Submission Queues: 64 00:08:21.349 Number of I/O Completion Queues: 64 00:08:21.349 00:08:21.349 ZNS Specific Controller Data 00:08:21.349 ============================ 00:08:21.349 Zone Append Size Limit: 0 00:08:21.349 00:08:21.349 00:08:21.349 Active Namespaces 00:08:21.349 ================= 00:08:21.349 Namespace ID:1 00:08:21.349 Error Recovery Timeout: Unlimited 00:08:21.349 Command Set Identifier: NVM (00h) 00:08:21.349 Deallocate: Supported 00:08:21.349 Deallocated/Unwritten Error: Supported 00:08:21.349 Deallocated Read Value: All 0x00 00:08:21.349 Deallocate in Write Zeroes: Not Supported 00:08:21.349 Deallocated Guard Field: 0xFFFF 00:08:21.349 Flush: Supported 00:08:21.349 Reservation: Not Supported 00:08:21.349 Metadata Transferred as: Separate Metadata Buffer 
00:08:21.349 Namespace Sharing Capabilities: Private 00:08:21.349 Size (in LBAs): 1548666 (5GiB) 00:08:21.349 Capacity (in LBAs): 1548666 (5GiB) 00:08:21.349 Utilization (in LBAs): 1548666 (5GiB) 00:08:21.349 Thin Provisioning: Not Supported 00:08:21.349 Per-NS Atomic Units: No 00:08:21.349 Maximum Single Source Range Length: 128 00:08:21.349 Maximum Copy Length: 128 00:08:21.349 Maximum Source Range Count: 128 00:08:21.349 NGUID/EUI64 Never Reused: No 00:08:21.349 Namespace Write Protected: No 00:08:21.349 Number of LBA Formats: 8 00:08:21.349 Current LBA Format: LBA Format #07 00:08:21.349 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.349 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.349 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.349 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.349 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.349 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.349 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.349 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.349 00:08:21.349 NVM Specific Namespace Data 00:08:21.349 =========================== 00:08:21.349 Logical Block Storage Tag Mask: 0 00:08:21.349 Protection Information Capabilities: 00:08:21.349 16b Guard Protection Information Storage Tag Support: No 00:08:21.349 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.349 Storage Tag Check Read Support: No 00:08:21.349 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.349 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.349 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.349 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.349 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.349 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.349 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.349 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.349 ===================================================== 00:08:21.349 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:21.349 ===================================================== 00:08:21.349 Controller Capabilities/Features 00:08:21.349 ================================ 00:08:21.349 Vendor ID: 1b36 00:08:21.349 Subsystem Vendor ID: 1af4 00:08:21.349 Serial Number: 12341 00:08:21.349 Model Number: QEMU NVMe Ctrl 00:08:21.349 Firmware Version: 8.0.0 00:08:21.349 Recommended Arb Burst: 6 00:08:21.349 IEEE OUI Identifier: 00 54 52 00:08:21.349 Multi-path I/O 00:08:21.349 May have multiple subsystem ports: No 00:08:21.349 May have multiple controllers: No 00:08:21.349 Associated with SR-IOV VF: No 00:08:21.349 Max Data Transfer Size: 524288 00:08:21.349 Max Number of Namespaces: 256 00:08:21.349 Max Number of I/O Queues: 64 00:08:21.349 NVMe Specification Version (VS): 1.4 00:08:21.349 NVMe Specification Version (Identify): 1.4 00:08:21.349 Maximum Queue Entries: 2048 00:08:21.349 Contiguous Queues Required: Yes 00:08:21.349 Arbitration Mechanisms Supported 00:08:21.349 Weighted Round Robin: Not Supported 00:08:21.349 Vendor Specific: Not Supported 00:08:21.349 Reset Timeout: 7500 ms 00:08:21.349 Doorbell Stride: 
4 bytes 00:08:21.349 NVM Subsystem Reset: Not Supported 00:08:21.349 Command Sets Supported 00:08:21.349 NVM Command Set: Supported 00:08:21.349 Boot Partition: Not Supported 00:08:21.349 Memory Page Size Minimum: 4096 bytes 00:08:21.349 Memory Page Size Maximum: 65536 bytes 00:08:21.349 Persistent Memory Region: Not Supported 00:08:21.349 Optional Asynchronous Events Supported 00:08:21.349 Namespace Attribute Notices: Supported 00:08:21.349 Firmware Activation Notices: Not Supported 00:08:21.349 ANA Change Notices: Not Supported 00:08:21.349 PLE Aggregate Log Change Notices: Not Supported 00:08:21.349 LBA Status Info Alert Notices: Not Supported 00:08:21.349 EGE Aggregate Log Change Notices: Not Supported 00:08:21.349 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.349 Zone Descriptor Change Notices: Not Supported 00:08:21.349 Discovery Log Change Notices: Not Supported 00:08:21.349 Controller Attributes 00:08:21.349 128-bit Host Identifier: Not Supported 00:08:21.349 Non-Operational Permissive Mode: Not Supported 00:08:21.349 NVM Sets: Not Supported 00:08:21.349 Read Recovery Levels: Not Supported 00:08:21.349 Endurance Groups: Not Supported 00:08:21.349 Predictable Latency Mode: Not Supported 00:08:21.349 Traffic Based Keep ALive: Not Supported 00:08:21.349 Namespace Granularity: Not Supported 00:08:21.349 SQ Associations: Not Supported 00:08:21.349 UUID List: Not Supported 00:08:21.349 Multi-Domain Subsystem: Not Supported 00:08:21.349 Fixed Capacity Management: Not Supported 00:08:21.349 Variable Capacity Management: Not Supported 00:08:21.349 Delete Endurance Group: Not Supported 00:08:21.349 Delete NVM Set: Not Supported 00:08:21.349 Extended LBA Formats Supported: Supported 00:08:21.349 Flexible Data Placement Supported: Not Supported 00:08:21.349 00:08:21.349 Controller Memory Buffer Support 00:08:21.349 ================================ 00:08:21.349 Supported: No 00:08:21.349 00:08:21.349 Persistent Memory Region Support 00:08:21.349 ================================ 00:08:21.349 Supported: No 00:08:21.349 00:08:21.349 Admin Command Set Attributes 00:08:21.349 ============================ 00:08:21.349 Security Send/Receive: Not Supported 00:08:21.349 Format NVM: Supported 00:08:21.349 Firmware Activate/Download: Not Supported 00:08:21.349 Namespace Management: Supported 00:08:21.349 Device Self-Test: Not Supported 00:08:21.349 Directives: Supported 00:08:21.349 NVMe-MI: Not Supported 00:08:21.349 Virtualization Management: Not Supported 00:08:21.349 Doorbell Buffer Config: Supported 00:08:21.349 Get LBA Status Capability: Not Supported 00:08:21.349 Command & Feature Lockdown Capability: Not Supported 00:08:21.349 Abort Command Limit: 4 00:08:21.349 Async Event Request Limit: 4 00:08:21.349 Number of Firmware Slots: N/A 00:08:21.349 Firmware Slot 1 Read-Only: N/A 00:08:21.349 Firmware Activation Without Reset: N/A 00:08:21.349 Multiple Update Detection Support: N/A 00:08:21.349 Firmware Update Granularity: No Information Provided 00:08:21.349 Per-Namespace SMART Log: Yes 00:08:21.349 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.349 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:21.349 Command Effects Log Page: Supported 00:08:21.349 Get Log Page Extended Data: Supported 00:08:21.349 Telemetry Log Pages: Not Supported 00:08:21.349 Persistent Event Log Pages: Not Supported 00:08:21.349 Supported Log Pages Log Page: May Support 00:08:21.349 Commands Supported & Effects Log Page: Not Supported 00:08:21.349 Feature Identifiers & Effects Log Page:May Support 
00:08:21.349 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.349 Data Area 4 for Telemetry Log: Not Supported 00:08:21.349 Error Log Page Entries Supported: 1 00:08:21.349 Keep Alive: Not Supported 00:08:21.349 00:08:21.349 NVM Command Set Attributes 00:08:21.349 ========================== 00:08:21.349 Submission Queue Entry Size 00:08:21.349 Max: 64 00:08:21.349 Min: 64 00:08:21.349 Completion Queue Entry Size 00:08:21.349 Max: 16 00:08:21.349 Min: 16 00:08:21.349 Number of Namespaces: 256 00:08:21.350 Compare Command: Supported 00:08:21.350 Write Uncorrectable Command: Not Supported 00:08:21.350 Dataset Management Command: Supported 00:08:21.350 Write Zeroes Command: Supported 00:08:21.350 Set Features Save Field: Supported 00:08:21.350 Reservations: Not Supported 00:08:21.350 Timestamp: Supported 00:08:21.350 Copy: Supported 00:08:21.350 Volatile Write Cache: Present 00:08:21.350 Atomic Write Unit (Normal): 1 00:08:21.350 Atomic Write Unit (PFail): 1 00:08:21.350 Atomic Compare & Write Unit: 1 00:08:21.350 Fused Compare & Write: Not Supported 00:08:21.350 Scatter-Gather List 00:08:21.350 SGL Command Set: Supported 00:08:21.350 SGL Keyed: Not Supported 00:08:21.350 SGL Bit Bucket Descriptor: Not Supported 00:08:21.350 SGL Metadata Pointer: Not Supported 00:08:21.350 Oversized SGL: Not Supported 00:08:21.350 SGL Metadata Address: Not Supported 00:08:21.350 SGL Offset: Not Supported 00:08:21.350 Transport SGL Data Block: Not Supported 00:08:21.350 Replay Protected Memory Block: Not Supported 00:08:21.350 00:08:21.350 Firmware Slot Information 00:08:21.350 ========================= 00:08:21.350 Active slot: 1 00:08:21.350 Slot 1 Firmware Revision: 1.0 00:08:21.350 00:08:21.350 00:08:21.350 Commands Supported and Effects 00:08:21.350 ============================== 00:08:21.350 Admin Commands 00:08:21.350 -------------- 00:08:21.350 Delete I/O Submission Queue (00h): Supported 00:08:21.350 Create I/O Submission Queue (01h): Supported 00:08:21.350 Get Log Page (02h): Supported 00:08:21.350 Delete I/O Completion Queue (04h): Supported 00:08:21.350 Create I/O Completion Queue (05h): Supported 00:08:21.350 Identify (06h): Supported 00:08:21.350 Abort (08h): Supported 00:08:21.350 Set Features (09h): Supported 00:08:21.350 Get Features (0Ah): Supported 00:08:21.350 Asynchronous Event Request (0Ch): Supported 00:08:21.350 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.350 Directive Send (19h): Supported 00:08:21.350 Directive Receive (1Ah): Supported 00:08:21.350 Virtualization Management (1Ch): Supported 00:08:21.350 Doorbell Buffer Config (7Ch): Supported 00:08:21.350 Format NVM (80h): Supported LBA-Change 00:08:21.350 I/O Commands 00:08:21.350 ------------ 00:08:21.350 Flush (00h): Supported LBA-Change 00:08:21.350 Write (01h): Supported LBA-Change 00:08:21.350 Read (02h): Supported 00:08:21.350 Compare (05h): Supported 00:08:21.350 Write Zeroes (08h): Supported LBA-Change 00:08:21.350 Dataset Management (09h): Supported LBA-Change 00:08:21.350 Unknown (0Ch): Supported 00:08:21.350 Unknown (12h): Supported 00:08:21.350 Copy (19h): Supported LBA-Change 00:08:21.350 Unknown (1Dh): Supported LBA-Change 00:08:21.350 00:08:21.350 Error Log 00:08:21.350 ========= 00:08:21.350 00:08:21.350 Arbitration 00:08:21.350 =========== 00:08:21.350 Arbitration Burst: no limit 00:08:21.350 00:08:21.350 Power Management 00:08:21.350 ================ 00:08:21.350 Number of Power States: 1 00:08:21.350 Current Power State: Power State #0 00:08:21.350 Power State #0: 00:08:21.350 Max 
Power: 25.00 W 00:08:21.350 Non-Operational State: Operational 00:08:21.350 Entry Latency: 16 microseconds 00:08:21.350 Exit Latency: 4 microseconds 00:08:21.350 Relative Read Throughput: 0 00:08:21.350 Relative Read Latency: 0 00:08:21.350 Relative Write Throughput: 0 00:08:21.350 Relative Write Latency: 0 00:08:21.350 Idle Power: Not Reported 00:08:21.350 Active Power: Not Reported 00:08:21.350 Non-Operational Permissive Mode: Not Supported 00:08:21.350 00:08:21.350 Health Information 00:08:21.350 ================== 00:08:21.350 Critical Warnings: 00:08:21.350 Available Spare Space: OK 00:08:21.350 Temperature: OK 00:08:21.350 Device Reliability: OK 00:08:21.350 Read Only: No 00:08:21.350 Volatile Memory Backup: OK 00:08:21.350 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.350 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.350 Available Spare: 0% 00:08:21.350 Available Spare Threshold: 0% 00:08:21.350 Life Percentage Used: 0% 00:08:21.350 Data Units Read: 1005 00:08:21.350 Data Units Written: 875 00:08:21.350 Host Read Commands: 53246 00:08:21.350 Host Write Commands: 52087 00:08:21.350 Controller Busy Time: 0 minutes 00:08:21.350 Power Cycles: 0 00:08:21.350 Power On Hours: 0 hours 00:08:21.350 Unsafe Shutdowns: 0 00:08:21.350 Unrecoverable Media Errors: 0 00:08:21.350 Lifetime Error Log Entries: 0 00:08:21.350 Warning Temperature Time: 0 minutes 00:08:21.350 Critical Temperature Time: 0 minutes 00:08:21.350 00:08:21.350 Number of Queues 00:08:21.350 ================ 00:08:21.350 Number of I/O Submission Queues: 64 00:08:21.350 Number of I/O Completion Queues: 64 00:08:21.350 00:08:21.350 ZNS Specific Controller Data 00:08:21.350 ============================ 00:08:21.350 Zone Append Size Limit: 0 00:08:21.350 00:08:21.350 00:08:21.350 Active Namespaces 00:08:21.350 ================= 00:08:21.350 Namespace ID:1 00:08:21.350 Error Recovery Timeout: Unlimited 00:08:21.350 Command Set Identifier: NVM (00h) 00:08:21.350 Deallocate: Supported 00:08:21.350 Deallocated/Unwritten Error: Supported 00:08:21.350 Deallocated Read Value: All 0x00 00:08:21.350 Deallocate in Write Zeroes: Not Supported 00:08:21.350 Deallocated Guard Field: 0xFFFF 00:08:21.350 Flush: Supported 00:08:21.350 Reservation: Not Supported 00:08:21.350 Namespace Sharing Capabilities: Private 00:08:21.350 Size (in LBAs): 1310720 (5GiB) 00:08:21.350 Capacity (in LBAs): 1310720 (5GiB) 00:08:21.350 Utilization (in LBAs): 1310720 (5GiB) 00:08:21.350 Thin Provisioning: Not Supported 00:08:21.350 Per-NS Atomic Units: No 00:08:21.350 Maximum Single Source Range Length: 128 00:08:21.350 Maximum Copy Length: 128 00:08:21.350 Maximum Source Range Count: 128 00:08:21.350 NGUID/EUI64 Never Reused: No 00:08:21.350 Namespace Write Protected: No 00:08:21.350 Number of LBA Formats: 8 00:08:21.350 Current LBA Format: LBA Format #04 00:08:21.350 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.350 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.350 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.350 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.350 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.350 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.350 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.350 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.350 00:08:21.350 NVM Specific Namespace Data 00:08:21.350 =========================== 00:08:21.350 Logical Block Storage Tag Mask: 0 00:08:21.350 Protection Information Capabilities: 00:08:21.350 16b 
Guard Protection Information Storage Tag Support: No 00:08:21.350 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.350 Storage Tag Check Read Support: No 00:08:21.350 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.350 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.350 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.350 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.350 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.350 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.350 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.350 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.350 ===================================================== 00:08:21.350 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:21.350 ===================================================== 00:08:21.350 Controller Capabilities/Features 00:08:21.350 ================================ 00:08:21.350 Vendor ID: 1b36 00:08:21.350 Subsystem Vendor ID: 1af4 00:08:21.350 Serial Number: 12343 00:08:21.350 Model Number: QEMU NVMe Ctrl 00:08:21.350 Firmware Version: 8.0.0 00:08:21.350 Recommended Arb Burst: 6 00:08:21.350 IEEE OUI Identifier: 00 54 52 00:08:21.350 Multi-path I/O 00:08:21.350 May have multiple subsystem ports: No 00:08:21.350 May have multiple controllers: Yes 00:08:21.350 Associated with SR-IOV VF: No 00:08:21.350 Max Data Transfer Size: 524288 00:08:21.350 Max Number of Namespaces: 256 00:08:21.350 Max Number of I/O Queues: 64 00:08:21.350 NVMe Specification Version (VS): 1.4 00:08:21.350 NVMe Specification Version (Identify): 1.4 00:08:21.350 Maximum Queue Entries: 2048 00:08:21.350 Contiguous Queues Required: Yes 00:08:21.350 Arbitration Mechanisms Supported 00:08:21.350 Weighted Round Robin: Not Supported 00:08:21.350 Vendor Specific: Not Supported 00:08:21.350 Reset Timeout: 7500 ms 00:08:21.350 Doorbell Stride: 4 bytes 00:08:21.351 NVM Subsystem Reset: Not Supported 00:08:21.351 Command Sets Supported 00:08:21.351 NVM Command Set: Supported 00:08:21.351 Boot Partition: Not Supported 00:08:21.351 Memory Page Size Minimum: 4096 bytes 00:08:21.351 Memory Page Size Maximum: 65536 bytes 00:08:21.351 Persistent Memory Region: Not Supported 00:08:21.351 Optional Asynchronous Events Supported 00:08:21.351 Namespace Attribute Notices: Supported 00:08:21.351 Firmware Activation Notices: Not Supported 00:08:21.351 ANA Change Notices: Not Supported 00:08:21.351 PLE Aggregate Log Change Notices: Not Supported 00:08:21.351 LBA Status Info Alert Notices: Not Supported 00:08:21.351 EGE Aggregate Log Change Notices: Not Supported 00:08:21.351 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.351 Zone Descriptor Change Notices: Not Supported 00:08:21.351 Discovery Log Change Notices: Not Supported 00:08:21.351 Controller Attributes 00:08:21.351 128-bit Host Identifier: Not Supported 00:08:21.351 Non-Operational Permissive Mode: Not Supported 00:08:21.351 NVM Sets: Not Supported 00:08:21.351 Read Recovery Levels: Not Supported 00:08:21.351 Endurance Groups: Supported 00:08:21.351 Predictable Latency Mode: Not Supported 00:08:21.351 Traffic Based Keep ALive: Not Supported 00:08:21.351 
Namespace Granularity: Not Supported 00:08:21.351 SQ Associations: Not Supported 00:08:21.351 UUID List: Not Supported 00:08:21.351 Multi-Domain Subsystem: Not Supported 00:08:21.351 Fixed Capacity Management: Not Supported 00:08:21.351 Variable Capacity Management: Not Supported 00:08:21.351 Delete Endurance Group: Not Supported 00:08:21.351 Delete NVM Set: Not Supported 00:08:21.351 Extended LBA Formats Supported: Supported 00:08:21.351 Flexible Data Placement Supported: Supported 00:08:21.351 00:08:21.351 Controller Memory Buffer Support 00:08:21.351 ================================ 00:08:21.351 Supported: No 00:08:21.351 00:08:21.351 Persistent Memory Region Support 00:08:21.351 ================================ 00:08:21.351 Supported: No 00:08:21.351 00:08:21.351 Admin Command Set Attributes 00:08:21.351 ============================ 00:08:21.351 Security Send/Receive: Not Supported 00:08:21.351 Format NVM: Supported 00:08:21.351 Firmware Activate/Download: Not Supported 00:08:21.351 Namespace Management: Supported 00:08:21.351 Device Self-Test: Not Supported 00:08:21.351 Directives: Supported 00:08:21.351 NVMe-MI: Not Supported 00:08:21.351 Virtualization Management: Not Supported 00:08:21.351 Doorbell Buffer Config: Supported 00:08:21.351 Get LBA Status Capability: Not Supported 00:08:21.351 Command & Feature Lockdown Capability: Not Supported 00:08:21.351 Abort Command Limit: 4 00:08:21.351 Async Event Request Limit: 4 00:08:21.351 Number of Firmware Slots: N/A 00:08:21.351 Firmware Slot 1 Read-Only: N/A 00:08:21.351 Firmware Activation Without Reset: N/A 00:08:21.351 Multiple Update Detection Support: N/A 00:08:21.351 Firmware Update Granularity: No Information Provided 00:08:21.351 Per-Namespace SMART Log: Yes 00:08:21.351 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.351 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:21.351 Command Effects Log Page: Supported 00:08:21.351 Get Log Page Extended Data: Supported 00:08:21.351 Telemetry Log Pages: Not Supported 00:08:21.351 Persistent Event Log Pages: Not Supported 00:08:21.351 Supported Log Pages Log Page: May Support 00:08:21.351 Commands Supported & Effects Log Page: Not Supported 00:08:21.351 [2024-11-20 20:50:39.327089] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74408 terminated unexpected 00:08:21.351 [2024-11-20 20:50:39.328009] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74408 terminated unexpected 00:08:21.351 [2024-11-20 20:50:39.328611] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74408 terminated unexpected 00:08:21.351 Feature Identifiers & Effects Log Page:May Support 00:08:21.351 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.351 Data Area 4 for Telemetry Log: Not Supported 00:08:21.351 Error Log Page Entries Supported: 1 00:08:21.351 Keep Alive: Not Supported 00:08:21.351 00:08:21.351 NVM Command Set Attributes 00:08:21.351 ========================== 00:08:21.351 Submission Queue Entry Size 00:08:21.351 Max: 64 00:08:21.351 Min: 64 00:08:21.351 Completion Queue Entry Size 00:08:21.351 Max: 16 00:08:21.351 Min: 16 00:08:21.351 Number of Namespaces: 256 00:08:21.351 Compare Command: Supported 00:08:21.351 Write Uncorrectable Command: Not Supported 00:08:21.351 Dataset Management Command: Supported 00:08:21.351 Write Zeroes Command: Supported 00:08:21.351 Set Features Save Field: Supported 00:08:21.351 Reservations: Not Supported 00:08:21.351 Timestamp: Supported
00:08:21.351 Copy: Supported 00:08:21.351 Volatile Write Cache: Present 00:08:21.351 Atomic Write Unit (Normal): 1 00:08:21.351 Atomic Write Unit (PFail): 1 00:08:21.351 Atomic Compare & Write Unit: 1 00:08:21.351 Fused Compare & Write: Not Supported 00:08:21.351 Scatter-Gather List 00:08:21.351 SGL Command Set: Supported 00:08:21.351 SGL Keyed: Not Supported 00:08:21.351 SGL Bit Bucket Descriptor: Not Supported 00:08:21.351 SGL Metadata Pointer: Not Supported 00:08:21.351 Oversized SGL: Not Supported 00:08:21.351 SGL Metadata Address: Not Supported 00:08:21.351 SGL Offset: Not Supported 00:08:21.351 Transport SGL Data Block: Not Supported 00:08:21.351 Replay Protected Memory Block: Not Supported 00:08:21.351 00:08:21.351 Firmware Slot Information 00:08:21.351 ========================= 00:08:21.351 Active slot: 1 00:08:21.351 Slot 1 Firmware Revision: 1.0 00:08:21.351 00:08:21.351 00:08:21.351 Commands Supported and Effects 00:08:21.351 ============================== 00:08:21.351 Admin Commands 00:08:21.351 -------------- 00:08:21.351 Delete I/O Submission Queue (00h): Supported 00:08:21.351 Create I/O Submission Queue (01h): Supported 00:08:21.351 Get Log Page (02h): Supported 00:08:21.351 Delete I/O Completion Queue (04h): Supported 00:08:21.351 Create I/O Completion Queue (05h): Supported 00:08:21.351 Identify (06h): Supported 00:08:21.351 Abort (08h): Supported 00:08:21.351 Set Features (09h): Supported 00:08:21.351 Get Features (0Ah): Supported 00:08:21.351 Asynchronous Event Request (0Ch): Supported 00:08:21.351 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.351 Directive Send (19h): Supported 00:08:21.351 Directive Receive (1Ah): Supported 00:08:21.351 Virtualization Management (1Ch): Supported 00:08:21.351 Doorbell Buffer Config (7Ch): Supported 00:08:21.351 Format NVM (80h): Supported LBA-Change 00:08:21.351 I/O Commands 00:08:21.351 ------------ 00:08:21.351 Flush (00h): Supported LBA-Change 00:08:21.351 Write (01h): Supported LBA-Change 00:08:21.351 Read (02h): Supported 00:08:21.351 Compare (05h): Supported 00:08:21.351 Write Zeroes (08h): Supported LBA-Change 00:08:21.351 Dataset Management (09h): Supported LBA-Change 00:08:21.351 Unknown (0Ch): Supported 00:08:21.351 Unknown (12h): Supported 00:08:21.351 Copy (19h): Supported LBA-Change 00:08:21.351 Unknown (1Dh): Supported LBA-Change 00:08:21.351 00:08:21.351 Error Log 00:08:21.351 ========= 00:08:21.351 00:08:21.351 Arbitration 00:08:21.351 =========== 00:08:21.351 Arbitration Burst: no limit 00:08:21.351 00:08:21.351 Power Management 00:08:21.351 ================ 00:08:21.351 Number of Power States: 1 00:08:21.351 Current Power State: Power State #0 00:08:21.351 Power State #0: 00:08:21.351 Max Power: 25.00 W 00:08:21.351 Non-Operational State: Operational 00:08:21.351 Entry Latency: 16 microseconds 00:08:21.351 Exit Latency: 4 microseconds 00:08:21.351 Relative Read Throughput: 0 00:08:21.351 Relative Read Latency: 0 00:08:21.351 Relative Write Throughput: 0 00:08:21.351 Relative Write Latency: 0 00:08:21.351 Idle Power: Not Reported 00:08:21.351 Active Power: Not Reported 00:08:21.351 Non-Operational Permissive Mode: Not Supported 00:08:21.351 00:08:21.351 Health Information 00:08:21.351 ================== 00:08:21.351 Critical Warnings: 00:08:21.351 Available Spare Space: OK 00:08:21.351 Temperature: OK 00:08:21.351 Device Reliability: OK 00:08:21.351 Read Only: No 00:08:21.351 Volatile Memory Backup: OK 00:08:21.351 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.351 Temperature Threshold: 343 
Kelvin (70 Celsius) 00:08:21.351 Available Spare: 0% 00:08:21.351 Available Spare Threshold: 0% 00:08:21.351 Life Percentage Used: 0% 00:08:21.351 Data Units Read: 979 00:08:21.351 Data Units Written: 908 00:08:21.351 Host Read Commands: 38208 00:08:21.351 Host Write Commands: 37631 00:08:21.351 Controller Busy Time: 0 minutes 00:08:21.351 Power Cycles: 0 00:08:21.351 Power On Hours: 0 hours 00:08:21.351 Unsafe Shutdowns: 0 00:08:21.351 Unrecoverable Media Errors: 0 00:08:21.352 Lifetime Error Log Entries: 0 00:08:21.352 Warning Temperature Time: 0 minutes 00:08:21.352 Critical Temperature Time: 0 minutes 00:08:21.352 00:08:21.352 Number of Queues 00:08:21.352 ================ 00:08:21.352 Number of I/O Submission Queues: 64 00:08:21.352 Number of I/O Completion Queues: 64 00:08:21.352 00:08:21.352 ZNS Specific Controller Data 00:08:21.352 ============================ 00:08:21.352 Zone Append Size Limit: 0 00:08:21.352 00:08:21.352 00:08:21.352 Active Namespaces 00:08:21.352 ================= 00:08:21.352 Namespace ID:1 00:08:21.352 Error Recovery Timeout: Unlimited 00:08:21.352 Command Set Identifier: NVM (00h) 00:08:21.352 Deallocate: Supported 00:08:21.352 Deallocated/Unwritten Error: Supported 00:08:21.352 Deallocated Read Value: All 0x00 00:08:21.352 Deallocate in Write Zeroes: Not Supported 00:08:21.352 Deallocated Guard Field: 0xFFFF 00:08:21.352 Flush: Supported 00:08:21.352 Reservation: Not Supported 00:08:21.352 Namespace Sharing Capabilities: Multiple Controllers 00:08:21.352 Size (in LBAs): 262144 (1GiB) 00:08:21.352 Capacity (in LBAs): 262144 (1GiB) 00:08:21.352 Utilization (in LBAs): 262144 (1GiB) 00:08:21.352 Thin Provisioning: Not Supported 00:08:21.352 Per-NS Atomic Units: No 00:08:21.352 Maximum Single Source Range Length: 128 00:08:21.352 Maximum Copy Length: 128 00:08:21.352 Maximum Source Range Count: 128 00:08:21.352 NGUID/EUI64 Never Reused: No 00:08:21.352 Namespace Write Protected: No 00:08:21.352 Endurance group ID: 1 00:08:21.352 Number of LBA Formats: 8 00:08:21.352 Current LBA Format: LBA Format #04 00:08:21.352 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.352 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.352 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.352 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.352 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.352 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.352 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.352 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.352 00:08:21.352 Get Feature FDP: 00:08:21.352 ================ 00:08:21.352 Enabled: Yes 00:08:21.352 FDP configuration index: 0 00:08:21.352 00:08:21.352 FDP configurations log page 00:08:21.352 =========================== 00:08:21.352 Number of FDP configurations: 1 00:08:21.352 Version: 0 00:08:21.352 Size: 112 00:08:21.352 FDP Configuration Descriptor: 0 00:08:21.352 Descriptor Size: 96 00:08:21.352 Reclaim Group Identifier format: 2 00:08:21.352 FDP Volatile Write Cache: Not Present 00:08:21.352 FDP Configuration: Valid 00:08:21.352 Vendor Specific Size: 0 00:08:21.352 Number of Reclaim Groups: 2 00:08:21.352 Number of Reclaim Unit Handles: 8 00:08:21.352 Max Placement Identifiers: 128 00:08:21.352 Number of Namespaces Supported: 256 00:08:21.352 Reclaim unit Nominal Size: 6000000 bytes 00:08:21.352 Estimated Reclaim Unit Time Limit: Not Reported 00:08:21.352 RUH Desc #000: RUH Type: Initially Isolated 00:08:21.352 RUH Desc #001: RUH Type: Initially Isolated
00:08:21.352 RUH Desc #002: RUH Type: Initially Isolated 00:08:21.352 RUH Desc #003: RUH Type: Initially Isolated 00:08:21.352 RUH Desc #004: RUH Type: Initially Isolated 00:08:21.352 RUH Desc #005: RUH Type: Initially Isolated 00:08:21.352 RUH Desc #006: RUH Type: Initially Isolated 00:08:21.352 RUH Desc #007: RUH Type: Initially Isolated 00:08:21.352 00:08:21.352 FDP reclaim unit handle usage log page 00:08:21.352 ====================================== 00:08:21.352 Number of Reclaim Unit Handles: 8 00:08:21.352 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:21.352 RUH Usage Desc #001: RUH Attributes: Unused 00:08:21.352 RUH Usage Desc #002: RUH Attributes: Unused 00:08:21.352 RUH Usage Desc #003: RUH Attributes: Unused 00:08:21.352 RUH Usage Desc #004: RUH Attributes: Unused 00:08:21.352 RUH Usage Desc #005: RUH Attributes: Unused 00:08:21.352 RUH Usage Desc #006: RUH Attributes: Unused 00:08:21.352 RUH Usage Desc #007: RUH Attributes: Unused 00:08:21.352 00:08:21.352 FDP statistics log page 00:08:21.352 ======================= 00:08:21.352 Host bytes with metadata written: 544251904 00:08:21.352 [2024-11-20 20:50:39.330592] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74408 terminated unexpected 00:08:21.352 Media bytes with metadata written: 544329728 00:08:21.352 Media bytes erased: 0 00:08:21.352 00:08:21.352 FDP events log page 00:08:21.352 =================== 00:08:21.352 Number of FDP events: 0 00:08:21.352 00:08:21.352 NVM Specific Namespace Data 00:08:21.352 =========================== 00:08:21.352 Logical Block Storage Tag Mask: 0 00:08:21.352 Protection Information Capabilities: 00:08:21.352 16b Guard Protection Information Storage Tag Support: No 00:08:21.352 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.352 Storage Tag Check Read Support: No 00:08:21.352 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.352 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.352 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.352 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.352 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.352 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.352 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.352 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.352 ===================================================== 00:08:21.352 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:21.352 ===================================================== 00:08:21.352 Controller Capabilities/Features 00:08:21.352 ================================ 00:08:21.352 Vendor ID: 1b36 00:08:21.352 Subsystem Vendor ID: 1af4 00:08:21.352 Serial Number: 12342 00:08:21.352 Model Number: QEMU NVMe Ctrl 00:08:21.352 Firmware Version: 8.0.0 00:08:21.352 Recommended Arb Burst: 6 00:08:21.352 IEEE OUI Identifier: 00 54 52 00:08:21.352 Multi-path I/O 00:08:21.352 May have multiple subsystem ports: No 00:08:21.352 May have multiple controllers: No 00:08:21.352 Associated with SR-IOV VF: No 00:08:21.352 Max Data Transfer Size: 524288 00:08:21.352 Max Number of Namespaces: 256 00:08:21.352 Max Number of I/O
Queues: 64 00:08:21.352 NVMe Specification Version (VS): 1.4 00:08:21.352 NVMe Specification Version (Identify): 1.4 00:08:21.352 Maximum Queue Entries: 2048 00:08:21.352 Contiguous Queues Required: Yes 00:08:21.352 Arbitration Mechanisms Supported 00:08:21.352 Weighted Round Robin: Not Supported 00:08:21.352 Vendor Specific: Not Supported 00:08:21.352 Reset Timeout: 7500 ms 00:08:21.352 Doorbell Stride: 4 bytes 00:08:21.352 NVM Subsystem Reset: Not Supported 00:08:21.352 Command Sets Supported 00:08:21.352 NVM Command Set: Supported 00:08:21.352 Boot Partition: Not Supported 00:08:21.352 Memory Page Size Minimum: 4096 bytes 00:08:21.352 Memory Page Size Maximum: 65536 bytes 00:08:21.352 Persistent Memory Region: Not Supported 00:08:21.352 Optional Asynchronous Events Supported 00:08:21.352 Namespace Attribute Notices: Supported 00:08:21.352 Firmware Activation Notices: Not Supported 00:08:21.352 ANA Change Notices: Not Supported 00:08:21.352 PLE Aggregate Log Change Notices: Not Supported 00:08:21.352 LBA Status Info Alert Notices: Not Supported 00:08:21.352 EGE Aggregate Log Change Notices: Not Supported 00:08:21.352 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.352 Zone Descriptor Change Notices: Not Supported 00:08:21.353 Discovery Log Change Notices: Not Supported 00:08:21.353 Controller Attributes 00:08:21.353 128-bit Host Identifier: Not Supported 00:08:21.353 Non-Operational Permissive Mode: Not Supported 00:08:21.353 NVM Sets: Not Supported 00:08:21.353 Read Recovery Levels: Not Supported 00:08:21.353 Endurance Groups: Not Supported 00:08:21.353 Predictable Latency Mode: Not Supported 00:08:21.353 Traffic Based Keep Alive: Not Supported 00:08:21.353 Namespace Granularity: Not Supported 00:08:21.353 SQ Associations: Not Supported 00:08:21.353 UUID List: Not Supported 00:08:21.353 Multi-Domain Subsystem: Not Supported 00:08:21.353 Fixed Capacity Management: Not Supported 00:08:21.353 Variable Capacity Management: Not Supported 00:08:21.353 Delete Endurance Group: Not Supported 00:08:21.353 Delete NVM Set: Not Supported 00:08:21.353 Extended LBA Formats Supported: Supported 00:08:21.353 Flexible Data Placement Supported: Not Supported 00:08:21.353 00:08:21.353 Controller Memory Buffer Support 00:08:21.353 ================================ 00:08:21.353 Supported: No 00:08:21.353 00:08:21.353 Persistent Memory Region Support 00:08:21.353 ================================ 00:08:21.353 Supported: No 00:08:21.353 00:08:21.353 Admin Command Set Attributes 00:08:21.353 ============================ 00:08:21.353 Security Send/Receive: Not Supported 00:08:21.353 Format NVM: Supported 00:08:21.353 Firmware Activate/Download: Not Supported 00:08:21.353 Namespace Management: Supported 00:08:21.353 Device Self-Test: Not Supported 00:08:21.353 Directives: Supported 00:08:21.353 NVMe-MI: Not Supported 00:08:21.353 Virtualization Management: Not Supported 00:08:21.353 Doorbell Buffer Config: Supported 00:08:21.353 Get LBA Status Capability: Not Supported 00:08:21.353 Command & Feature Lockdown Capability: Not Supported 00:08:21.353 Abort Command Limit: 4 00:08:21.353 Async Event Request Limit: 4 00:08:21.353 Number of Firmware Slots: N/A 00:08:21.353 Firmware Slot 1 Read-Only: N/A 00:08:21.353 Firmware Activation Without Reset: N/A 00:08:21.353 Multiple Update Detection Support: N/A 00:08:21.353 Firmware Update Granularity: No Information Provided 00:08:21.353 Per-Namespace SMART Log: Yes 00:08:21.353 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.353 Subsystem NQN: 
nqn.2019-08.org.qemu:12342 00:08:21.353 Command Effects Log Page: Supported 00:08:21.353 Get Log Page Extended Data: Supported 00:08:21.353 Telemetry Log Pages: Not Supported 00:08:21.353 Persistent Event Log Pages: Not Supported 00:08:21.353 Supported Log Pages Log Page: May Support 00:08:21.353 Commands Supported & Effects Log Page: Not Supported 00:08:21.353 Feature Identifiers & Effects Log Page: May Support 00:08:21.353 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.353 Data Area 4 for Telemetry Log: Not Supported 00:08:21.353 Error Log Page Entries Supported: 1 00:08:21.353 Keep Alive: Not Supported 00:08:21.353 00:08:21.353 NVM Command Set Attributes 00:08:21.353 ========================== 00:08:21.353 Submission Queue Entry Size 00:08:21.353 Max: 64 00:08:21.353 Min: 64 00:08:21.353 Completion Queue Entry Size 00:08:21.353 Max: 16 00:08:21.353 Min: 16 00:08:21.353 Number of Namespaces: 256 00:08:21.353 Compare Command: Supported 00:08:21.353 Write Uncorrectable Command: Not Supported 00:08:21.353 Dataset Management Command: Supported 00:08:21.353 Write Zeroes Command: Supported 00:08:21.353 Set Features Save Field: Supported 00:08:21.353 Reservations: Not Supported 00:08:21.353 Timestamp: Supported 00:08:21.353 Copy: Supported 00:08:21.353 Volatile Write Cache: Present 00:08:21.353 Atomic Write Unit (Normal): 1 00:08:21.353 Atomic Write Unit (PFail): 1 00:08:21.353 Atomic Compare & Write Unit: 1 00:08:21.353 Fused Compare & Write: Not Supported 00:08:21.353 Scatter-Gather List 00:08:21.353 SGL Command Set: Supported 00:08:21.353 SGL Keyed: Not Supported 00:08:21.353 SGL Bit Bucket Descriptor: Not Supported 00:08:21.353 SGL Metadata Pointer: Not Supported 00:08:21.353 Oversized SGL: Not Supported 00:08:21.353 SGL Metadata Address: Not Supported 00:08:21.353 SGL Offset: Not Supported 00:08:21.353 Transport SGL Data Block: Not Supported 00:08:21.353 Replay Protected Memory Block: Not Supported 00:08:21.353 00:08:21.353 Firmware Slot Information 00:08:21.353 ========================= 00:08:21.353 Active slot: 1 00:08:21.353 Slot 1 Firmware Revision: 1.0 00:08:21.353 00:08:21.353 00:08:21.353 Commands Supported and Effects 00:08:21.353 ============================== 00:08:21.353 Admin Commands 00:08:21.353 -------------- 00:08:21.353 Delete I/O Submission Queue (00h): Supported 00:08:21.353 Create I/O Submission Queue (01h): Supported 00:08:21.353 Get Log Page (02h): Supported 00:08:21.353 Delete I/O Completion Queue (04h): Supported 00:08:21.353 Create I/O Completion Queue (05h): Supported 00:08:21.353 Identify (06h): Supported 00:08:21.353 Abort (08h): Supported 00:08:21.353 Set Features (09h): Supported 00:08:21.353 Get Features (0Ah): Supported 00:08:21.353 Asynchronous Event Request (0Ch): Supported 00:08:21.353 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.353 Directive Send (19h): Supported 00:08:21.353 Directive Receive (1Ah): Supported 00:08:21.353 Virtualization Management (1Ch): Supported 00:08:21.353 Doorbell Buffer Config (7Ch): Supported 00:08:21.353 Format NVM (80h): Supported LBA-Change 00:08:21.353 I/O Commands 00:08:21.353 ------------ 00:08:21.353 Flush (00h): Supported LBA-Change 00:08:21.353 Write (01h): Supported LBA-Change 00:08:21.353 Read (02h): Supported 00:08:21.353 Compare (05h): Supported 00:08:21.353 Write Zeroes (08h): Supported LBA-Change 00:08:21.353 Dataset Management (09h): Supported LBA-Change 00:08:21.353 Unknown (0Ch): Supported 00:08:21.353 Unknown (12h): Supported 00:08:21.353 Copy (19h): Supported LBA-Change 
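Each entry in the Commands Supported and Effects listing above carries its opcode in parentheses, which makes the dump easy to post-process. A small sketch for pulling out every command reported as Supported, assuming the identify output was captured one record per line to identify.txt (the file name is an assumption, not something the test produces):

    # List the admin and I/O commands the controller reports as Supported
    grep -E '\([0-9A-F]{2}h\): Supported' identify.txt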
00:08:21.353 Unknown (1Dh): Supported LBA-Change 00:08:21.353 00:08:21.353 Error Log 00:08:21.353 ========= 00:08:21.353 00:08:21.353 Arbitration 00:08:21.353 =========== 00:08:21.353 Arbitration Burst: no limit 00:08:21.353 00:08:21.353 Power Management 00:08:21.353 ================ 00:08:21.353 Number of Power States: 1 00:08:21.353 Current Power State: Power State #0 00:08:21.353 Power State #0: 00:08:21.353 Max Power: 25.00 W 00:08:21.353 Non-Operational State: Operational 00:08:21.353 Entry Latency: 16 microseconds 00:08:21.353 Exit Latency: 4 microseconds 00:08:21.353 Relative Read Throughput: 0 00:08:21.353 Relative Read Latency: 0 00:08:21.353 Relative Write Throughput: 0 00:08:21.353 Relative Write Latency: 0 00:08:21.353 Idle Power: Not Reported 00:08:21.353 Active Power: Not Reported 00:08:21.353 Non-Operational Permissive Mode: Not Supported 00:08:21.353 00:08:21.353 Health Information 00:08:21.353 ================== 00:08:21.353 Critical Warnings: 00:08:21.353 Available Spare Space: OK 00:08:21.353 Temperature: OK 00:08:21.353 Device Reliability: OK 00:08:21.353 Read Only: No 00:08:21.353 Volatile Memory Backup: OK 00:08:21.353 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.353 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.353 Available Spare: 0% 00:08:21.353 Available Spare Threshold: 0% 00:08:21.353 Life Percentage Used: 0% 00:08:21.353 Data Units Read: 2267 00:08:21.353 Data Units Written: 2054 00:08:21.353 Host Read Commands: 109150 00:08:21.353 Host Write Commands: 107419 00:08:21.353 Controller Busy Time: 0 minutes 00:08:21.353 Power Cycles: 0 00:08:21.353 Power On Hours: 0 hours 00:08:21.353 Unsafe Shutdowns: 0 00:08:21.353 Unrecoverable Media Errors: 0 00:08:21.353 Lifetime Error Log Entries: 0 00:08:21.353 Warning Temperature Time: 0 minutes 00:08:21.353 Critical Temperature Time: 0 minutes 00:08:21.353 00:08:21.353 Number of Queues 00:08:21.353 ================ 00:08:21.353 Number of I/O Submission Queues: 64 00:08:21.353 Number of I/O Completion Queues: 64 00:08:21.353 00:08:21.353 ZNS Specific Controller Data 00:08:21.353 ============================ 00:08:21.353 Zone Append Size Limit: 0 00:08:21.353 00:08:21.353 00:08:21.353 Active Namespaces 00:08:21.353 ================= 00:08:21.353 Namespace ID:1 00:08:21.353 Error Recovery Timeout: Unlimited 00:08:21.353 Command Set Identifier: NVM (00h) 00:08:21.353 Deallocate: Supported 00:08:21.353 Deallocated/Unwritten Error: Supported 00:08:21.353 Deallocated Read Value: All 0x00 00:08:21.353 Deallocate in Write Zeroes: Not Supported 00:08:21.353 Deallocated Guard Field: 0xFFFF 00:08:21.353 Flush: Supported 00:08:21.353 Reservation: Not Supported 00:08:21.353 Namespace Sharing Capabilities: Private 00:08:21.354 Size (in LBAs): 1048576 (4GiB) 00:08:21.354 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.354 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.354 Thin Provisioning: Not Supported 00:08:21.354 Per-NS Atomic Units: No 00:08:21.354 Maximum Single Source Range Length: 128 00:08:21.354 Maximum Copy Length: 128 00:08:21.354 Maximum Source Range Count: 128 00:08:21.354 NGUID/EUI64 Never Reused: No 00:08:21.354 Namespace Write Protected: No 00:08:21.354 Number of LBA Formats: 8 00:08:21.354 Current LBA Format: LBA Format #04 00:08:21.354 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.354 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.354 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.354 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.354 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:08:21.354 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.354 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.354 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.354 00:08:21.354 NVM Specific Namespace Data 00:08:21.354 =========================== 00:08:21.354 Logical Block Storage Tag Mask: 0 00:08:21.354 Protection Information Capabilities: 00:08:21.354 16b Guard Protection Information Storage Tag Support: No 00:08:21.354 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.354 Storage Tag Check Read Support: No 00:08:21.354 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Namespace ID:2 00:08:21.354 Error Recovery Timeout: Unlimited 00:08:21.354 Command Set Identifier: NVM (00h) 00:08:21.354 Deallocate: Supported 00:08:21.354 Deallocated/Unwritten Error: Supported 00:08:21.354 Deallocated Read Value: All 0x00 00:08:21.354 Deallocate in Write Zeroes: Not Supported 00:08:21.354 Deallocated Guard Field: 0xFFFF 00:08:21.354 Flush: Supported 00:08:21.354 Reservation: Not Supported 00:08:21.354 Namespace Sharing Capabilities: Private 00:08:21.354 Size (in LBAs): 1048576 (4GiB) 00:08:21.354 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.354 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.354 Thin Provisioning: Not Supported 00:08:21.354 Per-NS Atomic Units: No 00:08:21.354 Maximum Single Source Range Length: 128 00:08:21.354 Maximum Copy Length: 128 00:08:21.354 Maximum Source Range Count: 128 00:08:21.354 NGUID/EUI64 Never Reused: No 00:08:21.354 Namespace Write Protected: No 00:08:21.354 Number of LBA Formats: 8 00:08:21.354 Current LBA Format: LBA Format #04 00:08:21.354 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.354 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.354 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.354 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.354 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.354 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.354 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.354 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.354 00:08:21.354 NVM Specific Namespace Data 00:08:21.354 =========================== 00:08:21.354 Logical Block Storage Tag Mask: 0 00:08:21.354 Protection Information Capabilities: 00:08:21.354 16b Guard Protection Information Storage Tag Support: No 00:08:21.354 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.354 Storage Tag Check Read Support: No 00:08:21.354 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:08:21.354 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Namespace ID:3 00:08:21.354 Error Recovery Timeout: Unlimited 00:08:21.354 Command Set Identifier: NVM (00h) 00:08:21.354 Deallocate: Supported 00:08:21.354 Deallocated/Unwritten Error: Supported 00:08:21.354 Deallocated Read Value: All 0x00 00:08:21.354 Deallocate in Write Zeroes: Not Supported 00:08:21.354 Deallocated Guard Field: 0xFFFF 00:08:21.354 Flush: Supported 00:08:21.354 Reservation: Not Supported 00:08:21.354 Namespace Sharing Capabilities: Private 00:08:21.354 Size (in LBAs): 1048576 (4GiB) 00:08:21.354 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.354 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.354 Thin Provisioning: Not Supported 00:08:21.354 Per-NS Atomic Units: No 00:08:21.354 Maximum Single Source Range Length: 128 00:08:21.354 Maximum Copy Length: 128 00:08:21.354 Maximum Source Range Count: 128 00:08:21.354 NGUID/EUI64 Never Reused: No 00:08:21.354 Namespace Write Protected: No 00:08:21.354 Number of LBA Formats: 8 00:08:21.354 Current LBA Format: LBA Format #04 00:08:21.354 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.354 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.354 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.354 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.354 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.354 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.354 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.354 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.354 00:08:21.354 NVM Specific Namespace Data 00:08:21.354 =========================== 00:08:21.354 Logical Block Storage Tag Mask: 0 00:08:21.354 Protection Information Capabilities: 00:08:21.354 16b Guard Protection Information Storage Tag Support: No 00:08:21.354 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.354 Storage Tag Check Read Support: No 00:08:21.354 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.354 20:50:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:21.354 20:50:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:21.613 ===================================================== 00:08:21.613 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:21.613 ===================================================== 00:08:21.613 Controller Capabilities/Features 00:08:21.613 ================================ 00:08:21.613 Vendor ID: 1b36 00:08:21.613 Subsystem Vendor ID: 1af4 00:08:21.613 Serial Number: 12340 00:08:21.613 Model Number: QEMU NVMe Ctrl 00:08:21.613 Firmware Version: 8.0.0 00:08:21.613 Recommended Arb Burst: 6 00:08:21.613 IEEE OUI Identifier: 00 54 52 00:08:21.613 Multi-path I/O 00:08:21.613 May have multiple subsystem ports: No 00:08:21.613 May have multiple controllers: No 00:08:21.613 Associated with SR-IOV VF: No 00:08:21.613 Max Data Transfer Size: 524288 00:08:21.613 Max Number of Namespaces: 256 00:08:21.613 Max Number of I/O Queues: 64 00:08:21.613 NVMe Specification Version (VS): 1.4 00:08:21.613 NVMe Specification Version (Identify): 1.4 00:08:21.613 Maximum Queue Entries: 2048 00:08:21.613 Contiguous Queues Required: Yes 00:08:21.613 Arbitration Mechanisms Supported 00:08:21.613 Weighted Round Robin: Not Supported 00:08:21.613 Vendor Specific: Not Supported 00:08:21.613 Reset Timeout: 7500 ms 00:08:21.613 Doorbell Stride: 4 bytes 00:08:21.613 NVM Subsystem Reset: Not Supported 00:08:21.613 Command Sets Supported 00:08:21.613 NVM Command Set: Supported 00:08:21.613 Boot Partition: Not Supported 00:08:21.613 Memory Page Size Minimum: 4096 bytes 00:08:21.613 Memory Page Size Maximum: 65536 bytes 00:08:21.613 Persistent Memory Region: Not Supported 00:08:21.613 Optional Asynchronous Events Supported 00:08:21.613 Namespace Attribute Notices: Supported 00:08:21.613 Firmware Activation Notices: Not Supported 00:08:21.613 ANA Change Notices: Not Supported 00:08:21.613 PLE Aggregate Log Change Notices: Not Supported 00:08:21.613 LBA Status Info Alert Notices: Not Supported 00:08:21.613 EGE Aggregate Log Change Notices: Not Supported 00:08:21.613 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.613 Zone Descriptor Change Notices: Not Supported 00:08:21.613 Discovery Log Change Notices: Not Supported 00:08:21.613 Controller Attributes 00:08:21.613 128-bit Host Identifier: Not Supported 00:08:21.613 Non-Operational Permissive Mode: Not Supported 00:08:21.613 NVM Sets: Not Supported 00:08:21.613 Read Recovery Levels: Not Supported 00:08:21.613 Endurance Groups: Not Supported 00:08:21.613 Predictable Latency Mode: Not Supported 00:08:21.613 Traffic Based Keep Alive: Not Supported 00:08:21.613 Namespace Granularity: Not Supported 00:08:21.613 SQ Associations: Not Supported 00:08:21.613 UUID List: Not Supported 00:08:21.613 Multi-Domain Subsystem: Not Supported 00:08:21.613 Fixed Capacity Management: Not Supported 00:08:21.613 Variable Capacity Management: Not Supported 00:08:21.613 Delete Endurance Group: Not Supported 00:08:21.613 Delete NVM Set: Not Supported 00:08:21.613 Extended LBA Formats Supported: Supported 00:08:21.613 Flexible Data Placement Supported: Not Supported 00:08:21.613 00:08:21.613 Controller Memory Buffer Support 00:08:21.613 ================================ 00:08:21.613 Supported: No 00:08:21.613 00:08:21.613 Persistent Memory Region Support 00:08:21.613 ================================ 00:08:21.613 Supported: No 00:08:21.613 00:08:21.613 Admin Command Set Attributes 00:08:21.613 ============================ 00:08:21.613 Security Send/Receive: Not Supported 00:08:21.613 
Format NVM: Supported 00:08:21.613 Firmware Activate/Download: Not Supported 00:08:21.613 Namespace Management: Supported 00:08:21.613 Device Self-Test: Not Supported 00:08:21.613 Directives: Supported 00:08:21.613 NVMe-MI: Not Supported 00:08:21.613 Virtualization Management: Not Supported 00:08:21.613 Doorbell Buffer Config: Supported 00:08:21.613 Get LBA Status Capability: Not Supported 00:08:21.613 Command & Feature Lockdown Capability: Not Supported 00:08:21.613 Abort Command Limit: 4 00:08:21.613 Async Event Request Limit: 4 00:08:21.613 Number of Firmware Slots: N/A 00:08:21.613 Firmware Slot 1 Read-Only: N/A 00:08:21.613 Firmware Activation Without Reset: N/A 00:08:21.613 Multiple Update Detection Support: N/A 00:08:21.613 Firmware Update Granularity: No Information Provided 00:08:21.613 Per-Namespace SMART Log: Yes 00:08:21.613 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.613 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:21.613 Command Effects Log Page: Supported 00:08:21.613 Get Log Page Extended Data: Supported 00:08:21.613 Telemetry Log Pages: Not Supported 00:08:21.613 Persistent Event Log Pages: Not Supported 00:08:21.613 Supported Log Pages Log Page: May Support 00:08:21.613 Commands Supported & Effects Log Page: Not Supported 00:08:21.613 Feature Identifiers & Effects Log Page: May Support 00:08:21.613 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.613 Data Area 4 for Telemetry Log: Not Supported 00:08:21.613 Error Log Page Entries Supported: 1 00:08:21.613 Keep Alive: Not Supported 00:08:21.613 00:08:21.613 NVM Command Set Attributes 00:08:21.613 ========================== 00:08:21.613 Submission Queue Entry Size 00:08:21.613 Max: 64 00:08:21.613 Min: 64 00:08:21.613 Completion Queue Entry Size 00:08:21.613 Max: 16 00:08:21.613 Min: 16 00:08:21.613 Number of Namespaces: 256 00:08:21.613 Compare Command: Supported 00:08:21.613 Write Uncorrectable Command: Not Supported 00:08:21.613 Dataset Management Command: Supported 00:08:21.613 Write Zeroes Command: Supported 00:08:21.613 Set Features Save Field: Supported 00:08:21.613 Reservations: Not Supported 00:08:21.613 Timestamp: Supported 00:08:21.613 Copy: Supported 00:08:21.613 Volatile Write Cache: Present 00:08:21.613 Atomic Write Unit (Normal): 1 00:08:21.613 Atomic Write Unit (PFail): 1 00:08:21.613 Atomic Compare & Write Unit: 1 00:08:21.613 Fused Compare & Write: Not Supported 00:08:21.613 Scatter-Gather List 00:08:21.613 SGL Command Set: Supported 00:08:21.613 SGL Keyed: Not Supported 00:08:21.613 SGL Bit Bucket Descriptor: Not Supported 00:08:21.613 SGL Metadata Pointer: Not Supported 00:08:21.613 Oversized SGL: Not Supported 00:08:21.613 SGL Metadata Address: Not Supported 00:08:21.613 SGL Offset: Not Supported 00:08:21.613 Transport SGL Data Block: Not Supported 00:08:21.613 Replay Protected Memory Block: Not Supported 00:08:21.613 00:08:21.613 Firmware Slot Information 00:08:21.613 ========================= 00:08:21.613 Active slot: 1 00:08:21.613 Slot 1 Firmware Revision: 1.0 00:08:21.613 00:08:21.613 00:08:21.613 Commands Supported and Effects 00:08:21.613 ============================== 00:08:21.613 Admin Commands 00:08:21.613 -------------- 00:08:21.613 Delete I/O Submission Queue (00h): Supported 00:08:21.613 Create I/O Submission Queue (01h): Supported 00:08:21.613 Get Log Page (02h): Supported 00:08:21.613 Delete I/O Completion Queue (04h): Supported 00:08:21.613 Create I/O Completion Queue (05h): Supported 00:08:21.613 Identify (06h): Supported 00:08:21.613 Abort (08h): Supported 
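The NVM command set attributes above fix the queue-pair memory footprint: 64-byte submission queue entries and 16-byte completion queue entries, against the 2048-entry maximum reported in the controller summary. A quick arithmetic sketch using those reported values:

    # Memory for one full-depth queue pair on this controller
    sqe_bytes=64 cqe_bytes=16 depth=2048
    echo "SQ: $((sqe_bytes * depth / 1024)) KiB, CQ: $((cqe_bytes * depth / 1024)) KiB"
    # -> SQ: 128 KiB, CQ: 32 KiB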
00:08:21.613 Set Features (09h): Supported 00:08:21.613 Get Features (0Ah): Supported 00:08:21.613 Asynchronous Event Request (0Ch): Supported 00:08:21.613 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.613 Directive Send (19h): Supported 00:08:21.613 Directive Receive (1Ah): Supported 00:08:21.613 Virtualization Management (1Ch): Supported 00:08:21.613 Doorbell Buffer Config (7Ch): Supported 00:08:21.613 Format NVM (80h): Supported LBA-Change 00:08:21.613 I/O Commands 00:08:21.613 ------------ 00:08:21.613 Flush (00h): Supported LBA-Change 00:08:21.613 Write (01h): Supported LBA-Change 00:08:21.613 Read (02h): Supported 00:08:21.613 Compare (05h): Supported 00:08:21.613 Write Zeroes (08h): Supported LBA-Change 00:08:21.613 Dataset Management (09h): Supported LBA-Change 00:08:21.613 Unknown (0Ch): Supported 00:08:21.613 Unknown (12h): Supported 00:08:21.613 Copy (19h): Supported LBA-Change 00:08:21.613 Unknown (1Dh): Supported LBA-Change 00:08:21.613 00:08:21.613 Error Log 00:08:21.613 ========= 00:08:21.613 00:08:21.613 Arbitration 00:08:21.613 =========== 00:08:21.613 Arbitration Burst: no limit 00:08:21.613 00:08:21.613 Power Management 00:08:21.613 ================ 00:08:21.613 Number of Power States: 1 00:08:21.613 Current Power State: Power State #0 00:08:21.613 Power State #0: 00:08:21.613 Max Power: 25.00 W 00:08:21.613 Non-Operational State: Operational 00:08:21.613 Entry Latency: 16 microseconds 00:08:21.614 Exit Latency: 4 microseconds 00:08:21.614 Relative Read Throughput: 0 00:08:21.614 Relative Read Latency: 0 00:08:21.614 Relative Write Throughput: 0 00:08:21.614 Relative Write Latency: 0 00:08:21.614 Idle Power: Not Reported 00:08:21.614 Active Power: Not Reported 00:08:21.614 Non-Operational Permissive Mode: Not Supported 00:08:21.614 00:08:21.614 Health Information 00:08:21.614 ================== 00:08:21.614 Critical Warnings: 00:08:21.614 Available Spare Space: OK 00:08:21.614 Temperature: OK 00:08:21.614 Device Reliability: OK 00:08:21.614 Read Only: No 00:08:21.614 Volatile Memory Backup: OK 00:08:21.614 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.614 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.614 Available Spare: 0% 00:08:21.614 Available Spare Threshold: 0% 00:08:21.614 Life Percentage Used: 0% 00:08:21.614 Data Units Read: 663 00:08:21.614 Data Units Written: 591 00:08:21.614 Host Read Commands: 35436 00:08:21.614 Host Write Commands: 35222 00:08:21.614 Controller Busy Time: 0 minutes 00:08:21.614 Power Cycles: 0 00:08:21.614 Power On Hours: 0 hours 00:08:21.614 Unsafe Shutdowns: 0 00:08:21.614 Unrecoverable Media Errors: 0 00:08:21.614 Lifetime Error Log Entries: 0 00:08:21.614 Warning Temperature Time: 0 minutes 00:08:21.614 Critical Temperature Time: 0 minutes 00:08:21.614 00:08:21.614 Number of Queues 00:08:21.614 ================ 00:08:21.614 Number of I/O Submission Queues: 64 00:08:21.614 Number of I/O Completion Queues: 64 00:08:21.614 00:08:21.614 ZNS Specific Controller Data 00:08:21.614 ============================ 00:08:21.614 Zone Append Size Limit: 0 00:08:21.614 00:08:21.614 00:08:21.614 Active Namespaces 00:08:21.614 ================= 00:08:21.614 Namespace ID:1 00:08:21.614 Error Recovery Timeout: Unlimited 00:08:21.614 Command Set Identifier: NVM (00h) 00:08:21.614 Deallocate: Supported 00:08:21.614 Deallocated/Unwritten Error: Supported 00:08:21.614 Deallocated Read Value: All 0x00 00:08:21.614 Deallocate in Write Zeroes: Not Supported 00:08:21.614 Deallocated Guard Field: 0xFFFF 00:08:21.614 Flush: 
Supported 00:08:21.614 Reservation: Not Supported 00:08:21.614 Metadata Transferred as: Separate Metadata Buffer 00:08:21.614 Namespace Sharing Capabilities: Private 00:08:21.614 Size (in LBAs): 1548666 (5GiB) 00:08:21.614 Capacity (in LBAs): 1548666 (5GiB) 00:08:21.614 Utilization (in LBAs): 1548666 (5GiB) 00:08:21.614 Thin Provisioning: Not Supported 00:08:21.614 Per-NS Atomic Units: No 00:08:21.614 Maximum Single Source Range Length: 128 00:08:21.614 Maximum Copy Length: 128 00:08:21.614 Maximum Source Range Count: 128 00:08:21.614 NGUID/EUI64 Never Reused: No 00:08:21.614 Namespace Write Protected: No 00:08:21.614 Number of LBA Formats: 8 00:08:21.614 Current LBA Format: LBA Format #07 00:08:21.614 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.614 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.614 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.614 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.614 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.614 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.614 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.614 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.614 00:08:21.614 NVM Specific Namespace Data 00:08:21.614 =========================== 00:08:21.614 Logical Block Storage Tag Mask: 0 00:08:21.614 Protection Information Capabilities: 00:08:21.614 16b Guard Protection Information Storage Tag Support: No 00:08:21.614 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.614 Storage Tag Check Read Support: No 00:08:21.614 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.614 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.614 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.614 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.614 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.614 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.614 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.614 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.614 20:50:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:21.614 20:50:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:21.874 ===================================================== 00:08:21.874 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:21.874 ===================================================== 00:08:21.874 Controller Capabilities/Features 00:08:21.874 ================================ 00:08:21.874 Vendor ID: 1b36 00:08:21.874 Subsystem Vendor ID: 1af4 00:08:21.874 Serial Number: 12341 00:08:21.874 Model Number: QEMU NVMe Ctrl 00:08:21.874 Firmware Version: 8.0.0 00:08:21.874 Recommended Arb Burst: 6 00:08:21.874 IEEE OUI Identifier: 00 54 52 00:08:21.874 Multi-path I/O 00:08:21.874 May have multiple subsystem ports: No 00:08:21.874 May have multiple controllers: No 00:08:21.874 Associated with SR-IOV VF: No 00:08:21.874 Max Data Transfer Size: 524288 00:08:21.874 Max Number of Namespaces: 256 00:08:21.874 Max Number of I/O Queues: 64 00:08:21.874 NVMe 
Specification Version (VS): 1.4 00:08:21.874 NVMe Specification Version (Identify): 1.4 00:08:21.874 Maximum Queue Entries: 2048 00:08:21.874 Contiguous Queues Required: Yes 00:08:21.874 Arbitration Mechanisms Supported 00:08:21.874 Weighted Round Robin: Not Supported 00:08:21.874 Vendor Specific: Not Supported 00:08:21.874 Reset Timeout: 7500 ms 00:08:21.874 Doorbell Stride: 4 bytes 00:08:21.874 NVM Subsystem Reset: Not Supported 00:08:21.874 Command Sets Supported 00:08:21.874 NVM Command Set: Supported 00:08:21.874 Boot Partition: Not Supported 00:08:21.874 Memory Page Size Minimum: 4096 bytes 00:08:21.874 Memory Page Size Maximum: 65536 bytes 00:08:21.874 Persistent Memory Region: Not Supported 00:08:21.874 Optional Asynchronous Events Supported 00:08:21.874 Namespace Attribute Notices: Supported 00:08:21.874 Firmware Activation Notices: Not Supported 00:08:21.874 ANA Change Notices: Not Supported 00:08:21.874 PLE Aggregate Log Change Notices: Not Supported 00:08:21.874 LBA Status Info Alert Notices: Not Supported 00:08:21.874 EGE Aggregate Log Change Notices: Not Supported 00:08:21.874 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.874 Zone Descriptor Change Notices: Not Supported 00:08:21.874 Discovery Log Change Notices: Not Supported 00:08:21.874 Controller Attributes 00:08:21.874 128-bit Host Identifier: Not Supported 00:08:21.874 Non-Operational Permissive Mode: Not Supported 00:08:21.874 NVM Sets: Not Supported 00:08:21.874 Read Recovery Levels: Not Supported 00:08:21.874 Endurance Groups: Not Supported 00:08:21.874 Predictable Latency Mode: Not Supported 00:08:21.874 Traffic Based Keep Alive: Not Supported 00:08:21.874 Namespace Granularity: Not Supported 00:08:21.874 SQ Associations: Not Supported 00:08:21.874 UUID List: Not Supported 00:08:21.874 Multi-Domain Subsystem: Not Supported 00:08:21.874 Fixed Capacity Management: Not Supported 00:08:21.874 Variable Capacity Management: Not Supported 00:08:21.874 Delete Endurance Group: Not Supported 00:08:21.874 Delete NVM Set: Not Supported 00:08:21.874 Extended LBA Formats Supported: Supported 00:08:21.874 Flexible Data Placement Supported: Not Supported 00:08:21.874 00:08:21.874 Controller Memory Buffer Support 00:08:21.874 ================================ 00:08:21.874 Supported: No 00:08:21.874 00:08:21.874 Persistent Memory Region Support 00:08:21.874 ================================ 00:08:21.874 Supported: No 00:08:21.874 00:08:21.874 Admin Command Set Attributes 00:08:21.874 ============================ 00:08:21.874 Security Send/Receive: Not Supported 00:08:21.874 Format NVM: Supported 00:08:21.874 Firmware Activate/Download: Not Supported 00:08:21.874 Namespace Management: Supported 00:08:21.874 Device Self-Test: Not Supported 00:08:21.874 Directives: Supported 00:08:21.874 NVMe-MI: Not Supported 00:08:21.874 Virtualization Management: Not Supported 00:08:21.874 Doorbell Buffer Config: Supported 00:08:21.874 Get LBA Status Capability: Not Supported 00:08:21.874 Command & Feature Lockdown Capability: Not Supported 00:08:21.874 Abort Command Limit: 4 00:08:21.874 Async Event Request Limit: 4 00:08:21.874 Number of Firmware Slots: N/A 00:08:21.874 Firmware Slot 1 Read-Only: N/A 00:08:21.874 Firmware Activation Without Reset: N/A 00:08:21.874 Multiple Update Detection Support: N/A 00:08:21.874 Firmware Update Granularity: No Information Provided 00:08:21.874 Per-Namespace SMART Log: Yes 00:08:21.874 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.874 Subsystem NQN: nqn.2019-08.org.qemu:12341 
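The controller summary above pairs a Max Data Transfer Size of 524288 bytes with a 4096-byte minimum memory page size, i.e. 128 pages per I/O; since NVMe encodes MDTS as a power-of-two multiple of the minimum page size, the raw field behind this line would be 7 (2^7 = 128). The same arithmetic in shell:

    # 524288 B / 4096 B per page = 128 pages = 2^7, so the MDTS field is 7
    echo "$((524288 / 4096)) pages per maximum transfer"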
00:08:21.874 Command Effects Log Page: Supported 00:08:21.874 Get Log Page Extended Data: Supported 00:08:21.874 Telemetry Log Pages: Not Supported 00:08:21.874 Persistent Event Log Pages: Not Supported 00:08:21.874 Supported Log Pages Log Page: May Support 00:08:21.874 Commands Supported & Effects Log Page: Not Supported 00:08:21.874 Feature Identifiers & Effects Log Page: May Support 00:08:21.874 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.875 Data Area 4 for Telemetry Log: Not Supported 00:08:21.875 Error Log Page Entries Supported: 1 00:08:21.875 Keep Alive: Not Supported 00:08:21.875 00:08:21.875 NVM Command Set Attributes 00:08:21.875 ========================== 00:08:21.875 Submission Queue Entry Size 00:08:21.875 Max: 64 00:08:21.875 Min: 64 00:08:21.875 Completion Queue Entry Size 00:08:21.875 Max: 16 00:08:21.875 Min: 16 00:08:21.875 Number of Namespaces: 256 00:08:21.875 Compare Command: Supported 00:08:21.875 Write Uncorrectable Command: Not Supported 00:08:21.875 Dataset Management Command: Supported 00:08:21.875 Write Zeroes Command: Supported 00:08:21.875 Set Features Save Field: Supported 00:08:21.875 Reservations: Not Supported 00:08:21.875 Timestamp: Supported 00:08:21.875 Copy: Supported 00:08:21.875 Volatile Write Cache: Present 00:08:21.875 Atomic Write Unit (Normal): 1 00:08:21.875 Atomic Write Unit (PFail): 1 00:08:21.875 Atomic Compare & Write Unit: 1 00:08:21.875 Fused Compare & Write: Not Supported 00:08:21.875 Scatter-Gather List 00:08:21.875 SGL Command Set: Supported 00:08:21.875 SGL Keyed: Not Supported 00:08:21.875 SGL Bit Bucket Descriptor: Not Supported 00:08:21.875 SGL Metadata Pointer: Not Supported 00:08:21.875 Oversized SGL: Not Supported 00:08:21.875 SGL Metadata Address: Not Supported 00:08:21.875 SGL Offset: Not Supported 00:08:21.875 Transport SGL Data Block: Not Supported 00:08:21.875 Replay Protected Memory Block: Not Supported 00:08:21.875 00:08:21.875 Firmware Slot Information 00:08:21.875 ========================= 00:08:21.875 Active slot: 1 00:08:21.875 Slot 1 Firmware Revision: 1.0 00:08:21.875 00:08:21.875 00:08:21.875 Commands Supported and Effects 00:08:21.875 ============================== 00:08:21.875 Admin Commands 00:08:21.875 -------------- 00:08:21.875 Delete I/O Submission Queue (00h): Supported 00:08:21.875 Create I/O Submission Queue (01h): Supported 00:08:21.875 Get Log Page (02h): Supported 00:08:21.875 Delete I/O Completion Queue (04h): Supported 00:08:21.875 Create I/O Completion Queue (05h): Supported 00:08:21.875 Identify (06h): Supported 00:08:21.875 Abort (08h): Supported 00:08:21.875 Set Features (09h): Supported 00:08:21.875 Get Features (0Ah): Supported 00:08:21.875 Asynchronous Event Request (0Ch): Supported 00:08:21.875 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.875 Directive Send (19h): Supported 00:08:21.875 Directive Receive (1Ah): Supported 00:08:21.875 Virtualization Management (1Ch): Supported 00:08:21.875 Doorbell Buffer Config (7Ch): Supported 00:08:21.875 Format NVM (80h): Supported LBA-Change 00:08:21.875 I/O Commands 00:08:21.875 ------------ 00:08:21.875 Flush (00h): Supported LBA-Change 00:08:21.875 Write (01h): Supported LBA-Change 00:08:21.875 Read (02h): Supported 00:08:21.875 Compare (05h): Supported 00:08:21.875 Write Zeroes (08h): Supported LBA-Change 00:08:21.875 Dataset Management (09h): Supported LBA-Change 00:08:21.875 Unknown (0Ch): Supported 00:08:21.875 Unknown (12h): Supported 00:08:21.875 Copy (19h): Supported LBA-Change 00:08:21.875 Unknown (1Dh): 
Supported LBA-Change 00:08:21.875 00:08:21.875 Error Log 00:08:21.875 ========= 00:08:21.875 00:08:21.875 Arbitration 00:08:21.875 =========== 00:08:21.875 Arbitration Burst: no limit 00:08:21.875 00:08:21.875 Power Management 00:08:21.875 ================ 00:08:21.875 Number of Power States: 1 00:08:21.875 Current Power State: Power State #0 00:08:21.875 Power State #0: 00:08:21.875 Max Power: 25.00 W 00:08:21.875 Non-Operational State: Operational 00:08:21.875 Entry Latency: 16 microseconds 00:08:21.875 Exit Latency: 4 microseconds 00:08:21.875 Relative Read Throughput: 0 00:08:21.875 Relative Read Latency: 0 00:08:21.875 Relative Write Throughput: 0 00:08:21.875 Relative Write Latency: 0 00:08:21.875 Idle Power: Not Reported 00:08:21.875 Active Power: Not Reported 00:08:21.875 Non-Operational Permissive Mode: Not Supported 00:08:21.875 00:08:21.875 Health Information 00:08:21.875 ================== 00:08:21.875 Critical Warnings: 00:08:21.875 Available Spare Space: OK 00:08:21.875 Temperature: OK 00:08:21.875 Device Reliability: OK 00:08:21.875 Read Only: No 00:08:21.875 Volatile Memory Backup: OK 00:08:21.875 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.875 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.875 Available Spare: 0% 00:08:21.875 Available Spare Threshold: 0% 00:08:21.875 Life Percentage Used: 0% 00:08:21.875 Data Units Read: 1005 00:08:21.875 Data Units Written: 875 00:08:21.875 Host Read Commands: 53246 00:08:21.875 Host Write Commands: 52087 00:08:21.875 Controller Busy Time: 0 minutes 00:08:21.875 Power Cycles: 0 00:08:21.875 Power On Hours: 0 hours 00:08:21.875 Unsafe Shutdowns: 0 00:08:21.875 Unrecoverable Media Errors: 0 00:08:21.875 Lifetime Error Log Entries: 0 00:08:21.875 Warning Temperature Time: 0 minutes 00:08:21.875 Critical Temperature Time: 0 minutes 00:08:21.875 00:08:21.875 Number of Queues 00:08:21.875 ================ 00:08:21.875 Number of I/O Submission Queues: 64 00:08:21.875 Number of I/O Completion Queues: 64 00:08:21.875 00:08:21.875 ZNS Specific Controller Data 00:08:21.875 ============================ 00:08:21.875 Zone Append Size Limit: 0 00:08:21.875 00:08:21.875 00:08:21.875 Active Namespaces 00:08:21.875 ================= 00:08:21.875 Namespace ID:1 00:08:21.875 Error Recovery Timeout: Unlimited 00:08:21.875 Command Set Identifier: NVM (00h) 00:08:21.875 Deallocate: Supported 00:08:21.875 Deallocated/Unwritten Error: Supported 00:08:21.875 Deallocated Read Value: All 0x00 00:08:21.875 Deallocate in Write Zeroes: Not Supported 00:08:21.875 Deallocated Guard Field: 0xFFFF 00:08:21.875 Flush: Supported 00:08:21.875 Reservation: Not Supported 00:08:21.875 Namespace Sharing Capabilities: Private 00:08:21.875 Size (in LBAs): 1310720 (5GiB) 00:08:21.875 Capacity (in LBAs): 1310720 (5GiB) 00:08:21.875 Utilization (in LBAs): 1310720 (5GiB) 00:08:21.875 Thin Provisioning: Not Supported 00:08:21.875 Per-NS Atomic Units: No 00:08:21.875 Maximum Single Source Range Length: 128 00:08:21.875 Maximum Copy Length: 128 00:08:21.875 Maximum Source Range Count: 128 00:08:21.875 NGUID/EUI64 Never Reused: No 00:08:21.875 Namespace Write Protected: No 00:08:21.875 Number of LBA Formats: 8 00:08:21.875 Current LBA Format: LBA Format #04 00:08:21.875 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.875 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.875 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.875 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.875 LBA Format #04: Data Size: 4096 Metadata Size: 0 
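The "(5GiB)" annotation above follows directly from the namespace geometry: 1310720 LBAs at the 4096-byte data size of the current LBA format #04. A one-line cross-check in shell arithmetic:

    # 1310720 LBAs x 4096 B = 5368709120 B = 5 GiB
    blocks=1310720 lba_size=4096
    echo "$((blocks * lba_size / 1024 / 1024 / 1024)) GiB"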
00:08:21.875 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.875 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.875 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.875 00:08:21.875 NVM Specific Namespace Data 00:08:21.875 =========================== 00:08:21.875 Logical Block Storage Tag Mask: 0 00:08:21.875 Protection Information Capabilities: 00:08:21.875 16b Guard Protection Information Storage Tag Support: No 00:08:21.875 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.875 Storage Tag Check Read Support: No 00:08:21.875 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.875 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.875 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.875 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.875 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.875 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.875 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.875 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.875 20:50:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:21.875 20:50:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:21.875 ===================================================== 00:08:21.875 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:21.875 ===================================================== 00:08:21.875 Controller Capabilities/Features 00:08:21.875 ================================ 00:08:21.875 Vendor ID: 1b36 00:08:21.875 Subsystem Vendor ID: 1af4 00:08:21.875 Serial Number: 12342 00:08:21.875 Model Number: QEMU NVMe Ctrl 00:08:21.875 Firmware Version: 8.0.0 00:08:21.875 Recommended Arb Burst: 6 00:08:21.875 IEEE OUI Identifier: 00 54 52 00:08:21.876 Multi-path I/O 00:08:21.876 May have multiple subsystem ports: No 00:08:21.876 May have multiple controllers: No 00:08:21.876 Associated with SR-IOV VF: No 00:08:21.876 Max Data Transfer Size: 524288 00:08:21.876 Max Number of Namespaces: 256 00:08:21.876 Max Number of I/O Queues: 64 00:08:21.876 NVMe Specification Version (VS): 1.4 00:08:21.876 NVMe Specification Version (Identify): 1.4 00:08:21.876 Maximum Queue Entries: 2048 00:08:21.876 Contiguous Queues Required: Yes 00:08:21.876 Arbitration Mechanisms Supported 00:08:21.876 Weighted Round Robin: Not Supported 00:08:21.876 Vendor Specific: Not Supported 00:08:21.876 Reset Timeout: 7500 ms 00:08:21.876 Doorbell Stride: 4 bytes 00:08:21.876 NVM Subsystem Reset: Not Supported 00:08:21.876 Command Sets Supported 00:08:21.876 NVM Command Set: Supported 00:08:21.876 Boot Partition: Not Supported 00:08:21.876 Memory Page Size Minimum: 4096 bytes 00:08:21.876 Memory Page Size Maximum: 65536 bytes 00:08:21.876 Persistent Memory Region: Not Supported 00:08:21.876 Optional Asynchronous Events Supported 00:08:21.876 Namespace Attribute Notices: Supported 00:08:21.876 Firmware Activation Notices: Not Supported 00:08:21.876 ANA Change Notices: Not Supported 00:08:21.876 PLE Aggregate Log Change Notices: Not Supported 00:08:21.876 LBA Status Info Alert Notices: 
Not Supported 00:08:21.876 EGE Aggregate Log Change Notices: Not Supported 00:08:21.876 Normal NVM Subsystem Shutdown event: Not Supported 00:08:21.876 Zone Descriptor Change Notices: Not Supported 00:08:21.876 Discovery Log Change Notices: Not Supported 00:08:21.876 Controller Attributes 00:08:21.876 128-bit Host Identifier: Not Supported 00:08:21.876 Non-Operational Permissive Mode: Not Supported 00:08:21.876 NVM Sets: Not Supported 00:08:21.876 Read Recovery Levels: Not Supported 00:08:21.876 Endurance Groups: Not Supported 00:08:21.876 Predictable Latency Mode: Not Supported 00:08:21.876 Traffic Based Keep Alive: Not Supported 00:08:21.876 Namespace Granularity: Not Supported 00:08:21.876 SQ Associations: Not Supported 00:08:21.876 UUID List: Not Supported 00:08:21.876 Multi-Domain Subsystem: Not Supported 00:08:21.876 Fixed Capacity Management: Not Supported 00:08:21.876 Variable Capacity Management: Not Supported 00:08:21.876 Delete Endurance Group: Not Supported 00:08:21.876 Delete NVM Set: Not Supported 00:08:21.876 Extended LBA Formats Supported: Supported 00:08:21.876 Flexible Data Placement Supported: Not Supported 00:08:21.876 00:08:21.876 Controller Memory Buffer Support 00:08:21.876 ================================ 00:08:21.876 Supported: No 00:08:21.876 00:08:21.876 Persistent Memory Region Support 00:08:21.876 ================================ 00:08:21.876 Supported: No 00:08:21.876 00:08:21.876 Admin Command Set Attributes 00:08:21.876 ============================ 00:08:21.876 Security Send/Receive: Not Supported 00:08:21.876 Format NVM: Supported 00:08:21.876 Firmware Activate/Download: Not Supported 00:08:21.876 Namespace Management: Supported 00:08:21.876 Device Self-Test: Not Supported 00:08:21.876 Directives: Supported 00:08:21.876 NVMe-MI: Not Supported 00:08:21.876 Virtualization Management: Not Supported 00:08:21.876 Doorbell Buffer Config: Supported 00:08:21.876 Get LBA Status Capability: Not Supported 00:08:21.876 Command & Feature Lockdown Capability: Not Supported 00:08:21.876 Abort Command Limit: 4 00:08:21.876 Async Event Request Limit: 4 00:08:21.876 Number of Firmware Slots: N/A 00:08:21.876 Firmware Slot 1 Read-Only: N/A 00:08:21.876 Firmware Activation Without Reset: N/A 00:08:21.876 Multiple Update Detection Support: N/A 00:08:21.876 Firmware Update Granularity: No Information Provided 00:08:21.876 Per-Namespace SMART Log: Yes 00:08:21.876 Asymmetric Namespace Access Log Page: Not Supported 00:08:21.876 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:21.876 Command Effects Log Page: Supported 00:08:21.876 Get Log Page Extended Data: Supported 00:08:21.876 Telemetry Log Pages: Not Supported 00:08:21.876 Persistent Event Log Pages: Not Supported 00:08:21.876 Supported Log Pages Log Page: May Support 00:08:21.876 Commands Supported & Effects Log Page: Not Supported 00:08:21.876 Feature Identifiers & Effects Log Page: May Support 00:08:21.876 NVMe-MI Commands & Effects Log Page: May Support 00:08:21.876 Data Area 4 for Telemetry Log: Not Supported 00:08:21.876 Error Log Page Entries Supported: 1 00:08:21.876 Keep Alive: Not Supported 00:08:21.876 00:08:21.876 NVM Command Set Attributes 00:08:21.876 ========================== 00:08:21.876 Submission Queue Entry Size 00:08:21.876 Max: 64 00:08:21.876 Min: 64 00:08:21.876 Completion Queue Entry Size 00:08:21.876 Max: 16 00:08:21.876 Min: 16 00:08:21.876 Number of Namespaces: 256 00:08:21.876 Compare Command: Supported 00:08:21.876 Write Uncorrectable Command: Not Supported 00:08:21.876 Dataset Management Command: 
Supported 00:08:21.876 Write Zeroes Command: Supported 00:08:21.876 Set Features Save Field: Supported 00:08:21.876 Reservations: Not Supported 00:08:21.876 Timestamp: Supported 00:08:21.876 Copy: Supported 00:08:21.876 Volatile Write Cache: Present 00:08:21.876 Atomic Write Unit (Normal): 1 00:08:21.876 Atomic Write Unit (PFail): 1 00:08:21.876 Atomic Compare & Write Unit: 1 00:08:21.876 Fused Compare & Write: Not Supported 00:08:21.876 Scatter-Gather List 00:08:21.876 SGL Command Set: Supported 00:08:21.876 SGL Keyed: Not Supported 00:08:21.876 SGL Bit Bucket Descriptor: Not Supported 00:08:21.876 SGL Metadata Pointer: Not Supported 00:08:21.876 Oversized SGL: Not Supported 00:08:21.876 SGL Metadata Address: Not Supported 00:08:21.876 SGL Offset: Not Supported 00:08:21.876 Transport SGL Data Block: Not Supported 00:08:21.876 Replay Protected Memory Block: Not Supported 00:08:21.876 00:08:21.876 Firmware Slot Information 00:08:21.876 ========================= 00:08:21.876 Active slot: 1 00:08:21.876 Slot 1 Firmware Revision: 1.0 00:08:21.876 00:08:21.876 00:08:21.876 Commands Supported and Effects 00:08:21.876 ============================== 00:08:21.876 Admin Commands 00:08:21.876 -------------- 00:08:21.876 Delete I/O Submission Queue (00h): Supported 00:08:21.876 Create I/O Submission Queue (01h): Supported 00:08:21.876 Get Log Page (02h): Supported 00:08:21.876 Delete I/O Completion Queue (04h): Supported 00:08:21.876 Create I/O Completion Queue (05h): Supported 00:08:21.876 Identify (06h): Supported 00:08:21.876 Abort (08h): Supported 00:08:21.876 Set Features (09h): Supported 00:08:21.876 Get Features (0Ah): Supported 00:08:21.876 Asynchronous Event Request (0Ch): Supported 00:08:21.876 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:21.876 Directive Send (19h): Supported 00:08:21.876 Directive Receive (1Ah): Supported 00:08:21.876 Virtualization Management (1Ch): Supported 00:08:21.876 Doorbell Buffer Config (7Ch): Supported 00:08:21.876 Format NVM (80h): Supported LBA-Change 00:08:21.876 I/O Commands 00:08:21.876 ------------ 00:08:21.876 Flush (00h): Supported LBA-Change 00:08:21.876 Write (01h): Supported LBA-Change 00:08:21.876 Read (02h): Supported 00:08:21.876 Compare (05h): Supported 00:08:21.876 Write Zeroes (08h): Supported LBA-Change 00:08:21.876 Dataset Management (09h): Supported LBA-Change 00:08:21.876 Unknown (0Ch): Supported 00:08:21.876 Unknown (12h): Supported 00:08:21.876 Copy (19h): Supported LBA-Change 00:08:21.876 Unknown (1Dh): Supported LBA-Change 00:08:21.876 00:08:21.876 Error Log 00:08:21.876 ========= 00:08:21.876 00:08:21.876 Arbitration 00:08:21.876 =========== 00:08:21.876 Arbitration Burst: no limit 00:08:21.876 00:08:21.876 Power Management 00:08:21.876 ================ 00:08:21.876 Number of Power States: 1 00:08:21.876 Current Power State: Power State #0 00:08:21.876 Power State #0: 00:08:21.876 Max Power: 25.00 W 00:08:21.876 Non-Operational State: Operational 00:08:21.876 Entry Latency: 16 microseconds 00:08:21.876 Exit Latency: 4 microseconds 00:08:21.876 Relative Read Throughput: 0 00:08:21.876 Relative Read Latency: 0 00:08:21.876 Relative Write Throughput: 0 00:08:21.876 Relative Write Latency: 0 00:08:21.876 Idle Power: Not Reported 00:08:21.876 Active Power: Not Reported 00:08:21.876 Non-Operational Permissive Mode: Not Supported 00:08:21.876 00:08:21.876 Health Information 00:08:21.876 ================== 00:08:21.876 Critical Warnings: 00:08:21.876 Available Spare Space: OK 00:08:21.876 Temperature: OK 00:08:21.876 Device 
Reliability: OK 00:08:21.876 Read Only: No 00:08:21.876 Volatile Memory Backup: OK 00:08:21.877 Current Temperature: 323 Kelvin (50 Celsius) 00:08:21.877 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:21.877 Available Spare: 0% 00:08:21.877 Available Spare Threshold: 0% 00:08:21.877 Life Percentage Used: 0% 00:08:21.877 Data Units Read: 2267 00:08:21.877 Data Units Written: 2054 00:08:21.877 Host Read Commands: 109150 00:08:21.877 Host Write Commands: 107419 00:08:21.877 Controller Busy Time: 0 minutes 00:08:21.877 Power Cycles: 0 00:08:21.877 Power On Hours: 0 hours 00:08:21.877 Unsafe Shutdowns: 0 00:08:21.877 Unrecoverable Media Errors: 0 00:08:21.877 Lifetime Error Log Entries: 0 00:08:21.877 Warning Temperature Time: 0 minutes 00:08:21.877 Critical Temperature Time: 0 minutes 00:08:21.877 00:08:21.877 Number of Queues 00:08:21.877 ================ 00:08:21.877 Number of I/O Submission Queues: 64 00:08:21.877 Number of I/O Completion Queues: 64 00:08:21.877 00:08:21.877 ZNS Specific Controller Data 00:08:21.877 ============================ 00:08:21.877 Zone Append Size Limit: 0 00:08:21.877 00:08:21.877 00:08:21.877 Active Namespaces 00:08:21.877 ================= 00:08:21.877 Namespace ID:1 00:08:21.877 Error Recovery Timeout: Unlimited 00:08:21.877 Command Set Identifier: NVM (00h) 00:08:21.877 Deallocate: Supported 00:08:21.877 Deallocated/Unwritten Error: Supported 00:08:21.877 Deallocated Read Value: All 0x00 00:08:21.877 Deallocate in Write Zeroes: Not Supported 00:08:21.877 Deallocated Guard Field: 0xFFFF 00:08:21.877 Flush: Supported 00:08:21.877 Reservation: Not Supported 00:08:21.877 Namespace Sharing Capabilities: Private 00:08:21.877 Size (in LBAs): 1048576 (4GiB) 00:08:21.877 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.877 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.877 Thin Provisioning: Not Supported 00:08:21.877 Per-NS Atomic Units: No 00:08:21.877 Maximum Single Source Range Length: 128 00:08:21.877 Maximum Copy Length: 128 00:08:21.877 Maximum Source Range Count: 128 00:08:21.877 NGUID/EUI64 Never Reused: No 00:08:21.877 Namespace Write Protected: No 00:08:21.877 Number of LBA Formats: 8 00:08:21.877 Current LBA Format: LBA Format #04 00:08:21.877 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.877 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.877 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.877 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.877 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.877 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.877 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.877 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.877 00:08:21.877 NVM Specific Namespace Data 00:08:21.877 =========================== 00:08:21.877 Logical Block Storage Tag Mask: 0 00:08:21.877 Protection Information Capabilities: 00:08:21.877 16b Guard Protection Information Storage Tag Support: No 00:08:21.877 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.877 Storage Tag Check Read Support: No 00:08:21.877 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Namespace ID:2 00:08:21.877 Error Recovery Timeout: Unlimited 00:08:21.877 Command Set Identifier: NVM (00h) 00:08:21.877 Deallocate: Supported 00:08:21.877 Deallocated/Unwritten Error: Supported 00:08:21.877 Deallocated Read Value: All 0x00 00:08:21.877 Deallocate in Write Zeroes: Not Supported 00:08:21.877 Deallocated Guard Field: 0xFFFF 00:08:21.877 Flush: Supported 00:08:21.877 Reservation: Not Supported 00:08:21.877 Namespace Sharing Capabilities: Private 00:08:21.877 Size (in LBAs): 1048576 (4GiB) 00:08:21.877 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.877 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.877 Thin Provisioning: Not Supported 00:08:21.877 Per-NS Atomic Units: No 00:08:21.877 Maximum Single Source Range Length: 128 00:08:21.877 Maximum Copy Length: 128 00:08:21.877 Maximum Source Range Count: 128 00:08:21.877 NGUID/EUI64 Never Reused: No 00:08:21.877 Namespace Write Protected: No 00:08:21.877 Number of LBA Formats: 8 00:08:21.877 Current LBA Format: LBA Format #04 00:08:21.877 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.877 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.877 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.877 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.877 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.877 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.877 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.877 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.877 00:08:21.877 NVM Specific Namespace Data 00:08:21.877 =========================== 00:08:21.877 Logical Block Storage Tag Mask: 0 00:08:21.877 Protection Information Capabilities: 00:08:21.877 16b Guard Protection Information Storage Tag Support: No 00:08:21.877 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.877 Storage Tag Check Read Support: No 00:08:21.877 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Namespace ID:3 00:08:21.877 Error Recovery Timeout: Unlimited 00:08:21.877 Command Set Identifier: NVM (00h) 00:08:21.877 Deallocate: Supported 00:08:21.877 Deallocated/Unwritten Error: Supported 00:08:21.877 Deallocated Read Value: All 0x00 00:08:21.877 Deallocate in Write Zeroes: Not Supported 00:08:21.877 Deallocated Guard Field: 0xFFFF 00:08:21.877 Flush: Supported 00:08:21.877 Reservation: Not Supported 00:08:21.877 
Namespace Sharing Capabilities: Private 00:08:21.877 Size (in LBAs): 1048576 (4GiB) 00:08:21.877 Capacity (in LBAs): 1048576 (4GiB) 00:08:21.877 Utilization (in LBAs): 1048576 (4GiB) 00:08:21.877 Thin Provisioning: Not Supported 00:08:21.877 Per-NS Atomic Units: No 00:08:21.877 Maximum Single Source Range Length: 128 00:08:21.877 Maximum Copy Length: 128 00:08:21.877 Maximum Source Range Count: 128 00:08:21.877 NGUID/EUI64 Never Reused: No 00:08:21.877 Namespace Write Protected: No 00:08:21.877 Number of LBA Formats: 8 00:08:21.877 Current LBA Format: LBA Format #04 00:08:21.877 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:21.877 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:21.877 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:21.877 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:21.877 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:21.877 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:21.877 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:21.877 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:21.877 00:08:21.877 NVM Specific Namespace Data 00:08:21.877 =========================== 00:08:21.877 Logical Block Storage Tag Mask: 0 00:08:21.877 Protection Information Capabilities: 00:08:21.877 16b Guard Protection Information Storage Tag Support: No 00:08:21.877 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:21.877 Storage Tag Check Read Support: No 00:08:21.877 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:21.877 20:50:39 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:21.877 20:50:39 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:22.137 ===================================================== 00:08:22.137 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:22.137 ===================================================== 00:08:22.137 Controller Capabilities/Features 00:08:22.137 ================================ 00:08:22.137 Vendor ID: 1b36 00:08:22.137 Subsystem Vendor ID: 1af4 00:08:22.137 Serial Number: 12343 00:08:22.137 Model Number: QEMU NVMe Ctrl 00:08:22.137 Firmware Version: 8.0.0 00:08:22.137 Recommended Arb Burst: 6 00:08:22.137 IEEE OUI Identifier: 00 54 52 00:08:22.137 Multi-path I/O 00:08:22.137 May have multiple subsystem ports: No 00:08:22.137 May have multiple controllers: Yes 00:08:22.137 Associated with SR-IOV VF: No 00:08:22.137 Max Data Transfer Size: 524288 00:08:22.137 Max Number of Namespaces: 256 00:08:22.137 Max Number of I/O Queues: 64 00:08:22.137 NVMe Specification Version (VS): 1.4 00:08:22.137 NVMe Specification Version (Identify): 1.4 00:08:22.137 Maximum Queue Entries: 2048 
00:08:22.137 Contiguous Queues Required: Yes 00:08:22.137 Arbitration Mechanisms Supported 00:08:22.137 Weighted Round Robin: Not Supported 00:08:22.137 Vendor Specific: Not Supported 00:08:22.137 Reset Timeout: 7500 ms 00:08:22.137 Doorbell Stride: 4 bytes 00:08:22.137 NVM Subsystem Reset: Not Supported 00:08:22.137 Command Sets Supported 00:08:22.137 NVM Command Set: Supported 00:08:22.137 Boot Partition: Not Supported 00:08:22.137 Memory Page Size Minimum: 4096 bytes 00:08:22.137 Memory Page Size Maximum: 65536 bytes 00:08:22.137 Persistent Memory Region: Not Supported 00:08:22.137 Optional Asynchronous Events Supported 00:08:22.137 Namespace Attribute Notices: Supported 00:08:22.137 Firmware Activation Notices: Not Supported 00:08:22.137 ANA Change Notices: Not Supported 00:08:22.137 PLE Aggregate Log Change Notices: Not Supported 00:08:22.137 LBA Status Info Alert Notices: Not Supported 00:08:22.137 EGE Aggregate Log Change Notices: Not Supported 00:08:22.137 Normal NVM Subsystem Shutdown event: Not Supported 00:08:22.137 Zone Descriptor Change Notices: Not Supported 00:08:22.137 Discovery Log Change Notices: Not Supported 00:08:22.137 Controller Attributes 00:08:22.137 128-bit Host Identifier: Not Supported 00:08:22.137 Non-Operational Permissive Mode: Not Supported 00:08:22.137 NVM Sets: Not Supported 00:08:22.137 Read Recovery Levels: Not Supported 00:08:22.137 Endurance Groups: Supported 00:08:22.137 Predictable Latency Mode: Not Supported 00:08:22.137 Traffic Based Keep Alive: Not Supported 00:08:22.137 Namespace Granularity: Not Supported 00:08:22.137 SQ Associations: Not Supported 00:08:22.137 UUID List: Not Supported 00:08:22.137 Multi-Domain Subsystem: Not Supported 00:08:22.137 Fixed Capacity Management: Not Supported 00:08:22.137 Variable Capacity Management: Not Supported 00:08:22.137 Delete Endurance Group: Not Supported 00:08:22.137 Delete NVM Set: Not Supported 00:08:22.137 Extended LBA Formats Supported: Supported 00:08:22.137 Flexible Data Placement Supported: Supported 00:08:22.137 00:08:22.137 Controller Memory Buffer Support 00:08:22.137 ================================ 00:08:22.137 Supported: No 00:08:22.137 00:08:22.137 Persistent Memory Region Support 00:08:22.137 ================================ 00:08:22.137 Supported: No 00:08:22.137 00:08:22.137 Admin Command Set Attributes 00:08:22.137 ============================ 00:08:22.137 Security Send/Receive: Not Supported 00:08:22.137 Format NVM: Supported 00:08:22.137 Firmware Activate/Download: Not Supported 00:08:22.137 Namespace Management: Supported 00:08:22.137 Device Self-Test: Not Supported 00:08:22.137 Directives: Supported 00:08:22.137 NVMe-MI: Not Supported 00:08:22.137 Virtualization Management: Not Supported 00:08:22.137 Doorbell Buffer Config: Supported 00:08:22.137 Get LBA Status Capability: Not Supported 00:08:22.137 Command & Feature Lockdown Capability: Not Supported 00:08:22.137 Abort Command Limit: 4 00:08:22.137 Async Event Request Limit: 4 00:08:22.137 Number of Firmware Slots: N/A 00:08:22.137 Firmware Slot 1 Read-Only: N/A 00:08:22.137 Firmware Activation Without Reset: N/A 00:08:22.137 Multiple Update Detection Support: N/A 00:08:22.137 Firmware Update Granularity: No Information Provided 00:08:22.137 Per-Namespace SMART Log: Yes 00:08:22.137 Asymmetric Namespace Access Log Page: Not Supported 00:08:22.137 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:22.137 Command Effects Log Page: Supported 00:08:22.137 Get Log Page Extended Data: Supported 00:08:22.137 Telemetry Log Pages: Not 
Supported 00:08:22.137 Persistent Event Log Pages: Not Supported 00:08:22.137 Supported Log Pages Log Page: May Support 00:08:22.137 Commands Supported & Effects Log Page: Not Supported 00:08:22.137 Feature Identifiers & Effects Log Page: May Support 00:08:22.137 NVMe-MI Commands & Effects Log Page: May Support 00:08:22.137 Data Area 4 for Telemetry Log: Not Supported 00:08:22.137 Error Log Page Entries Supported: 1 00:08:22.137 Keep Alive: Not Supported 00:08:22.137 00:08:22.137 NVM Command Set Attributes 00:08:22.137 ========================== 00:08:22.137 Submission Queue Entry Size 00:08:22.137 Max: 64 00:08:22.137 Min: 64 00:08:22.137 Completion Queue Entry Size 00:08:22.137 Max: 16 00:08:22.137 Min: 16 00:08:22.137 Number of Namespaces: 256 00:08:22.137 Compare Command: Supported 00:08:22.137 Write Uncorrectable Command: Not Supported 00:08:22.137 Dataset Management Command: Supported 00:08:22.137 Write Zeroes Command: Supported 00:08:22.137 Set Features Save Field: Supported 00:08:22.137 Reservations: Not Supported 00:08:22.137 Timestamp: Supported 00:08:22.137 Copy: Supported 00:08:22.137 Volatile Write Cache: Present 00:08:22.137 Atomic Write Unit (Normal): 1 00:08:22.137 Atomic Write Unit (PFail): 1 00:08:22.137 Atomic Compare & Write Unit: 1 00:08:22.137 Fused Compare & Write: Not Supported 00:08:22.137 Scatter-Gather List 00:08:22.137 SGL Command Set: Supported 00:08:22.137 SGL Keyed: Not Supported 00:08:22.137 SGL Bit Bucket Descriptor: Not Supported 00:08:22.137 SGL Metadata Pointer: Not Supported 00:08:22.137 Oversized SGL: Not Supported 00:08:22.137 SGL Metadata Address: Not Supported 00:08:22.137 SGL Offset: Not Supported 00:08:22.137 Transport SGL Data Block: Not Supported 00:08:22.137 Replay Protected Memory Block: Not Supported 00:08:22.138 00:08:22.138 Firmware Slot Information 00:08:22.138 ========================= 00:08:22.138 Active slot: 1 00:08:22.138 Slot 1 Firmware Revision: 1.0 00:08:22.138 00:08:22.138 00:08:22.138 Commands Supported and Effects 00:08:22.138 ============================== 00:08:22.138 Admin Commands 00:08:22.138 -------------- 00:08:22.138 Delete I/O Submission Queue (00h): Supported 00:08:22.138 Create I/O Submission Queue (01h): Supported 00:08:22.138 Get Log Page (02h): Supported 00:08:22.138 Delete I/O Completion Queue (04h): Supported 00:08:22.138 Create I/O Completion Queue (05h): Supported 00:08:22.138 Identify (06h): Supported 00:08:22.138 Abort (08h): Supported 00:08:22.138 Set Features (09h): Supported 00:08:22.138 Get Features (0Ah): Supported 00:08:22.138 Asynchronous Event Request (0Ch): Supported 00:08:22.138 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:22.138 Directive Send (19h): Supported 00:08:22.138 Directive Receive (1Ah): Supported 00:08:22.138 Virtualization Management (1Ch): Supported 00:08:22.138 Doorbell Buffer Config (7Ch): Supported 00:08:22.138 Format NVM (80h): Supported LBA-Change 00:08:22.138 I/O Commands 00:08:22.138 ------------ 00:08:22.138 Flush (00h): Supported LBA-Change 00:08:22.138 Write (01h): Supported LBA-Change 00:08:22.138 Read (02h): Supported 00:08:22.138 Compare (05h): Supported 00:08:22.138 Write Zeroes (08h): Supported LBA-Change 00:08:22.138 Dataset Management (09h): Supported LBA-Change 00:08:22.138 Unknown (0Ch): Supported 00:08:22.138 Unknown (12h): Supported 00:08:22.138 Copy (19h): Supported LBA-Change 00:08:22.138 Unknown (1Dh): Supported LBA-Change 00:08:22.138 00:08:22.138 Error Log 00:08:22.138 ========= 00:08:22.138 00:08:22.138 Arbitration 00:08:22.138 =========== 
00:08:22.138 Arbitration Burst: no limit 00:08:22.138 00:08:22.138 Power Management 00:08:22.138 ================ 00:08:22.138 Number of Power States: 1 00:08:22.138 Current Power State: Power State #0 00:08:22.138 Power State #0: 00:08:22.138 Max Power: 25.00 W 00:08:22.138 Non-Operational State: Operational 00:08:22.138 Entry Latency: 16 microseconds 00:08:22.138 Exit Latency: 4 microseconds 00:08:22.138 Relative Read Throughput: 0 00:08:22.138 Relative Read Latency: 0 00:08:22.138 Relative Write Throughput: 0 00:08:22.138 Relative Write Latency: 0 00:08:22.138 Idle Power: Not Reported 00:08:22.138 Active Power: Not Reported 00:08:22.138 Non-Operational Permissive Mode: Not Supported 00:08:22.138 00:08:22.138 Health Information 00:08:22.138 ================== 00:08:22.138 Critical Warnings: 00:08:22.138 Available Spare Space: OK 00:08:22.138 Temperature: OK 00:08:22.138 Device Reliability: OK 00:08:22.138 Read Only: No 00:08:22.138 Volatile Memory Backup: OK 00:08:22.138 Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.138 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:22.138 Available Spare: 0% 00:08:22.138 Available Spare Threshold: 0% 00:08:22.138 Life Percentage Used: 0% 00:08:22.138 Data Units Read: 979 00:08:22.138 Data Units Written: 908 00:08:22.138 Host Read Commands: 38208 00:08:22.138 Host Write Commands: 37631 00:08:22.138 Controller Busy Time: 0 minutes 00:08:22.138 Power Cycles: 0 00:08:22.138 Power On Hours: 0 hours 00:08:22.138 Unsafe Shutdowns: 0 00:08:22.138 Unrecoverable Media Errors: 0 00:08:22.138 Lifetime Error Log Entries: 0 00:08:22.138 Warning Temperature Time: 0 minutes 00:08:22.138 Critical Temperature Time: 0 minutes 00:08:22.138 00:08:22.138 Number of Queues 00:08:22.138 ================ 00:08:22.138 Number of I/O Submission Queues: 64 00:08:22.138 Number of I/O Completion Queues: 64 00:08:22.138 00:08:22.138 ZNS Specific Controller Data 00:08:22.138 ============================ 00:08:22.138 Zone Append Size Limit: 0 00:08:22.138 00:08:22.138 00:08:22.138 Active Namespaces 00:08:22.138 ================= 00:08:22.138 Namespace ID:1 00:08:22.138 Error Recovery Timeout: Unlimited 00:08:22.138 Command Set Identifier: NVM (00h) 00:08:22.138 Deallocate: Supported 00:08:22.138 Deallocated/Unwritten Error: Supported 00:08:22.138 Deallocated Read Value: All 0x00 00:08:22.138 Deallocate in Write Zeroes: Not Supported 00:08:22.138 Deallocated Guard Field: 0xFFFF 00:08:22.138 Flush: Supported 00:08:22.138 Reservation: Not Supported 00:08:22.138 Namespace Sharing Capabilities: Multiple Controllers 00:08:22.138 Size (in LBAs): 262144 (1GiB) 00:08:22.138 Capacity (in LBAs): 262144 (1GiB) 00:08:22.138 Utilization (in LBAs): 262144 (1GiB) 00:08:22.138 Thin Provisioning: Not Supported 00:08:22.138 Per-NS Atomic Units: No 00:08:22.138 Maximum Single Source Range Length: 128 00:08:22.138 Maximum Copy Length: 128 00:08:22.138 Maximum Source Range Count: 128 00:08:22.138 NGUID/EUI64 Never Reused: No 00:08:22.138 Namespace Write Protected: No 00:08:22.138 Endurance group ID: 1 00:08:22.138 Number of LBA Formats: 8 00:08:22.138 Current LBA Format: LBA Format #04 00:08:22.138 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.138 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.138 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.138 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.138 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.138 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.138 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:22.138 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.138 00:08:22.138 Get Feature FDP: 00:08:22.138 ================ 00:08:22.138 Enabled: Yes 00:08:22.138 FDP configuration index: 0 00:08:22.138 00:08:22.138 FDP configurations log page 00:08:22.138 =========================== 00:08:22.138 Number of FDP configurations: 1 00:08:22.138 Version: 0 00:08:22.138 Size: 112 00:08:22.138 FDP Configuration Descriptor: 0 00:08:22.138 Descriptor Size: 96 00:08:22.138 Reclaim Group Identifier format: 2 00:08:22.138 FDP Volatile Write Cache: Not Present 00:08:22.138 FDP Configuration: Valid 00:08:22.138 Vendor Specific Size: 0 00:08:22.138 Number of Reclaim Groups: 2 00:08:22.138 Number of Reclaim Unit Handles: 8 00:08:22.138 Max Placement Identifiers: 128 00:08:22.138 Number of Namespaces Supported: 256 00:08:22.138 Reclaim Unit Nominal Size: 6000000 bytes 00:08:22.138 Estimated Reclaim Unit Time Limit: Not Reported 00:08:22.138 RUH Desc #000: RUH Type: Initially Isolated 00:08:22.138 RUH Desc #001: RUH Type: Initially Isolated 00:08:22.138 RUH Desc #002: RUH Type: Initially Isolated 00:08:22.138 RUH Desc #003: RUH Type: Initially Isolated 00:08:22.138 RUH Desc #004: RUH Type: Initially Isolated 00:08:22.138 RUH Desc #005: RUH Type: Initially Isolated 00:08:22.138 RUH Desc #006: RUH Type: Initially Isolated 00:08:22.138 RUH Desc #007: RUH Type: Initially Isolated 00:08:22.138 00:08:22.138 FDP reclaim unit handle usage log page 00:08:22.138 ====================================== 00:08:22.138 Number of Reclaim Unit Handles: 8 00:08:22.138 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:22.138 RUH Usage Desc #001: RUH Attributes: Unused 00:08:22.138 RUH Usage Desc #002: RUH Attributes: Unused 00:08:22.138 RUH Usage Desc #003: RUH Attributes: Unused 00:08:22.138 RUH Usage Desc #004: RUH Attributes: Unused 00:08:22.138 RUH Usage Desc #005: RUH Attributes: Unused 00:08:22.138 RUH Usage Desc #006: RUH Attributes: Unused 00:08:22.138 RUH Usage Desc #007: RUH Attributes: Unused 00:08:22.138 00:08:22.138 FDP statistics log page 00:08:22.138 ======================= 00:08:22.138 Host bytes with metadata written: 544251904 00:08:22.138 Media bytes with metadata written: 544329728 00:08:22.138 Media bytes erased: 0 00:08:22.138 00:08:22.138 FDP events log page 00:08:22.138 =================== 00:08:22.138 Number of FDP events: 0 00:08:22.138 00:08:22.138 NVM Specific Namespace Data 00:08:22.138 =========================== 00:08:22.138 Logical Block Storage Tag Mask: 0 00:08:22.138 Protection Information Capabilities: 00:08:22.138 16b Guard Protection Information Storage Tag Support: No 00:08:22.138 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.138 Storage Tag Check Read Support: No 00:08:22.138 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.138 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.138 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.138 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.138 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.138 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.138 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.138 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.138 00:08:22.139 real 0m1.048s 00:08:22.139 user 0m0.383s 00:08:22.139 sys 0m0.448s 00:08:22.139 20:50:40 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.139 ************************************ 00:08:22.139 END TEST nvme_identify 00:08:22.139 20:50:40 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:22.139 ************************************ 00:08:22.139 20:50:40 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:22.139 20:50:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.139 20:50:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.139 20:50:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.139 ************************************ 00:08:22.139 START TEST nvme_perf 00:08:22.139 ************************************ 00:08:22.139 20:50:40 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:22.139 20:50:40 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:23.518 Initializing NVMe Controllers 00:08:23.518 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:23.518 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:23.518 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:23.518 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:23.518 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:23.518 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:23.518 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:23.518 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:23.518 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:23.518 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:23.518 Initialization complete. Launching workers. 
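The perf results that follow can be cross-checked from the invocation above: -q 128 is the queue depth, -w read the workload, -o 12288 the I/O size in bytes, and -t 1 the run time in seconds; the doubled -L appears to be what enables the detailed cumulative latency histograms printed after the summaries. A minimal Python sketch of the arithmetic, assuming each histogram row gives an upper bucket edge in microseconds and the cumulative share of I/Os at or below it (the percentile helper is an illustration of how the summary percentiles relate to the histogram, not SPDK code; all constants are copied from the tables below):

    # Cross-check the spdk_nvme_perf summary table.
    IO_SIZE = 12288                 # bytes per I/O, from "-o 12288" above
    iops = 16212.57                 # per-device IOPS from the table below

    print(round(iops * IO_SIZE / 2**20, 2))   # 189.99 MiB/s, matching the MiB/s column
    print(round(6 * iops, 2))                 # 97275.42, ~ the Total row's 97275.40

    # Each histogram row below reads "upper edge (us): cumulative % ( bucket count )".
    # A reported percentile is the first edge whose cumulative share reaches the target.
    def percentile(rows, pct):
        for edge_us, cum_pct in rows:
            if cum_pct >= pct:
                return edge_us
        return rows[-1][0]

    # Excerpt of the 0000:00:10.0 histogram around the median:
    rows = [(7057.723, 44.0699), (7108.135, 46.4505),
            (7158.548, 48.7328), (7208.960, 50.6828)]
    print(percentile(rows, 50.0))   # 7208.960 us, the reported 50.00000% value
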
00:08:23.518 ======================================================== 00:08:23.519 Latency(us) 00:08:23.519 Device Information : IOPS MiB/s Average min max 00:08:23.519 PCIE (0000:00:10.0) NSID 1 from core 0: 16212.57 189.99 7897.30 4633.20 27966.44 00:08:23.519 PCIE (0000:00:11.0) NSID 1 from core 0: 16212.57 189.99 7890.75 4484.99 28067.69 00:08:23.519 PCIE (0000:00:13.0) NSID 1 from core 0: 16212.57 189.99 7883.10 4001.94 27720.71 00:08:23.519 PCIE (0000:00:12.0) NSID 1 from core 0: 16212.57 189.99 7875.23 3802.40 26952.56 00:08:23.519 PCIE (0000:00:12.0) NSID 2 from core 0: 16212.57 189.99 7867.36 3654.02 26159.05 00:08:23.519 PCIE (0000:00:12.0) NSID 3 from core 0: 16212.57 189.99 7859.53 3498.73 25405.97 00:08:23.519 ======================================================== 00:08:23.519 Total : 97275.40 1139.95 7878.88 3498.73 28067.69 00:08:23.519 00:08:23.519 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:23.519 ================================================================================= 00:08:23.519 1.00000% : 6074.683us 00:08:23.519 10.00000% : 6402.363us 00:08:23.519 25.00000% : 6704.837us 00:08:23.519 50.00000% : 7208.960us 00:08:23.519 75.00000% : 8721.329us 00:08:23.519 90.00000% : 9628.751us 00:08:23.519 95.00000% : 11393.182us 00:08:23.519 98.00000% : 13510.498us 00:08:23.519 99.00000% : 14216.271us 00:08:23.519 99.50000% : 17745.132us 00:08:23.519 99.90000% : 27424.295us 00:08:23.519 99.99000% : 28029.243us 00:08:23.519 99.99900% : 28029.243us 00:08:23.519 99.99990% : 28029.243us 00:08:23.519 99.99999% : 28029.243us 00:08:23.519 00:08:23.519 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:23.519 ================================================================================= 00:08:23.519 1.00000% : 6150.302us 00:08:23.519 10.00000% : 6452.775us 00:08:23.519 25.00000% : 6704.837us 00:08:23.519 50.00000% : 7158.548us 00:08:23.519 75.00000% : 8721.329us 00:08:23.519 90.00000% : 9578.338us 00:08:23.519 95.00000% : 11393.182us 00:08:23.519 98.00000% : 13308.849us 00:08:23.519 99.00000% : 14115.446us 00:08:23.519 99.50000% : 18047.606us 00:08:23.519 99.90000% : 27625.945us 00:08:23.519 99.99000% : 28230.892us 00:08:23.519 99.99900% : 28230.892us 00:08:23.519 99.99990% : 28230.892us 00:08:23.519 99.99999% : 28230.892us 00:08:23.519 00:08:23.519 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:23.519 ================================================================================= 00:08:23.519 1.00000% : 6150.302us 00:08:23.519 10.00000% : 6452.775us 00:08:23.519 25.00000% : 6704.837us 00:08:23.519 50.00000% : 7158.548us 00:08:23.519 75.00000% : 8771.742us 00:08:23.519 90.00000% : 9628.751us 00:08:23.519 95.00000% : 11191.532us 00:08:23.519 98.00000% : 13308.849us 00:08:23.519 99.00000% : 14014.622us 00:08:23.519 99.50000% : 18249.255us 00:08:23.519 99.90000% : 27625.945us 00:08:23.519 99.99000% : 27827.594us 00:08:23.519 99.99900% : 27827.594us 00:08:23.519 99.99990% : 27827.594us 00:08:23.519 99.99999% : 27827.594us 00:08:23.519 00:08:23.519 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:23.519 ================================================================================= 00:08:23.519 1.00000% : 6125.095us 00:08:23.519 10.00000% : 6452.775us 00:08:23.519 25.00000% : 6704.837us 00:08:23.519 50.00000% : 7158.548us 00:08:23.519 75.00000% : 8771.742us 00:08:23.519 90.00000% : 9578.338us 00:08:23.519 95.00000% : 10989.883us 00:08:23.519 98.00000% : 13308.849us 00:08:23.519 99.00000% : 
14014.622us 00:08:23.519 99.50000% : 18350.080us 00:08:23.519 99.90000% : 26819.348us 00:08:23.519 99.99000% : 27020.997us 00:08:23.519 99.99900% : 27020.997us 00:08:23.519 99.99990% : 27020.997us 00:08:23.519 99.99999% : 27020.997us 00:08:23.519 00:08:23.519 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:23.519 ================================================================================= 00:08:23.519 1.00000% : 6125.095us 00:08:23.519 10.00000% : 6452.775us 00:08:23.519 25.00000% : 6704.837us 00:08:23.519 50.00000% : 7158.548us 00:08:23.519 75.00000% : 8771.742us 00:08:23.519 90.00000% : 9578.338us 00:08:23.519 95.00000% : 11040.295us 00:08:23.519 98.00000% : 13308.849us 00:08:23.519 99.00000% : 14014.622us 00:08:23.519 99.50000% : 18450.905us 00:08:23.519 99.90000% : 26012.751us 00:08:23.519 99.99000% : 26214.400us 00:08:23.519 99.99900% : 26214.400us 00:08:23.519 99.99990% : 26214.400us 00:08:23.519 99.99999% : 26214.400us 00:08:23.519 00:08:23.519 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:23.519 ================================================================================= 00:08:23.519 1.00000% : 6125.095us 00:08:23.519 10.00000% : 6427.569us 00:08:23.519 25.00000% : 6704.837us 00:08:23.519 50.00000% : 7158.548us 00:08:23.519 75.00000% : 8721.329us 00:08:23.519 90.00000% : 9527.926us 00:08:23.519 95.00000% : 11191.532us 00:08:23.519 98.00000% : 13208.025us 00:08:23.519 99.00000% : 14014.622us 00:08:23.519 99.50000% : 18652.554us 00:08:23.519 99.90000% : 25206.154us 00:08:23.519 99.99000% : 25407.803us 00:08:23.519 99.99900% : 25407.803us 00:08:23.519 99.99990% : 25407.803us 00:08:23.519 99.99999% : 25407.803us 00:08:23.519 00:08:23.519 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:23.519 ============================================================================== 00:08:23.519 Range in us Cumulative IO count 00:08:23.519 4612.726 - 4637.932: 0.0062% ( 1) 00:08:23.519 4637.932 - 4663.138: 0.0246% ( 3) 00:08:23.519 4663.138 - 4688.345: 0.0431% ( 3) 00:08:23.519 4688.345 - 4713.551: 0.0492% ( 1) 00:08:23.519 4713.551 - 4738.757: 0.0554% ( 1) 00:08:23.519 4738.757 - 4763.963: 0.0615% ( 1) 00:08:23.519 4763.963 - 4789.169: 0.0800% ( 3) 00:08:23.519 4789.169 - 4814.375: 0.0861% ( 1) 00:08:23.519 4814.375 - 4839.582: 0.1046% ( 3) 00:08:23.519 4839.582 - 4864.788: 0.1107% ( 1) 00:08:23.519 4864.788 - 4889.994: 0.1230% ( 2) 00:08:23.519 4915.200 - 4940.406: 0.1292% ( 1) 00:08:23.519 4940.406 - 4965.612: 0.1476% ( 3) 00:08:23.519 4965.612 - 4990.818: 0.1599% ( 2) 00:08:23.519 4990.818 - 5016.025: 0.1661% ( 1) 00:08:23.519 5016.025 - 5041.231: 0.1845% ( 3) 00:08:23.519 5041.231 - 5066.437: 0.1907% ( 1) 00:08:23.519 5066.437 - 5091.643: 0.2030% ( 2) 00:08:23.519 5091.643 - 5116.849: 0.2153% ( 2) 00:08:23.519 5116.849 - 5142.055: 0.2276% ( 2) 00:08:23.519 5142.055 - 5167.262: 0.2338% ( 1) 00:08:23.519 5167.262 - 5192.468: 0.2522% ( 3) 00:08:23.519 5192.468 - 5217.674: 0.2584% ( 1) 00:08:23.519 5217.674 - 5242.880: 0.2707% ( 2) 00:08:23.519 5242.880 - 5268.086: 0.2830% ( 2) 00:08:23.519 5268.086 - 5293.292: 0.2953% ( 2) 00:08:23.519 5293.292 - 5318.498: 0.3014% ( 1) 00:08:23.519 5318.498 - 5343.705: 0.3137% ( 2) 00:08:23.519 5343.705 - 5368.911: 0.3260% ( 2) 00:08:23.519 5368.911 - 5394.117: 0.3383% ( 2) 00:08:23.519 5394.117 - 5419.323: 0.3445% ( 1) 00:08:23.519 5419.323 - 5444.529: 0.3568% ( 2) 00:08:23.519 5444.529 - 5469.735: 0.3752% ( 3) 00:08:23.519 5469.735 - 5494.942: 0.3875% ( 2) 00:08:23.519 5494.942 - 
5520.148: 0.3937% ( 1) 00:08:23.519 5973.858 - 5999.065: 0.4368% ( 7) 00:08:23.519 5999.065 - 6024.271: 0.5290% ( 15) 00:08:23.519 6024.271 - 6049.477: 0.7689% ( 39) 00:08:23.519 6049.477 - 6074.683: 1.1011% ( 54) 00:08:23.519 6074.683 - 6099.889: 1.5071% ( 66) 00:08:23.519 6099.889 - 6125.095: 1.9870% ( 78) 00:08:23.519 6125.095 - 6150.302: 2.5098% ( 85) 00:08:23.519 6150.302 - 6175.508: 3.0327% ( 85) 00:08:23.519 6175.508 - 6200.714: 3.6417% ( 99) 00:08:23.519 6200.714 - 6225.920: 4.2446% ( 98) 00:08:23.519 6225.920 - 6251.126: 4.9889% ( 121) 00:08:23.519 6251.126 - 6276.332: 5.8132% ( 134) 00:08:23.519 6276.332 - 6301.538: 6.6868% ( 142) 00:08:23.519 6301.538 - 6326.745: 7.6218% ( 152) 00:08:23.519 6326.745 - 6351.951: 8.6245% ( 163) 00:08:23.519 6351.951 - 6377.157: 9.8671% ( 202) 00:08:23.519 6377.157 - 6402.363: 10.8637% ( 162) 00:08:23.519 6402.363 - 6427.569: 11.9956% ( 184) 00:08:23.519 6427.569 - 6452.775: 13.0659% ( 174) 00:08:23.519 6452.775 - 6503.188: 15.4281% ( 384) 00:08:23.519 6503.188 - 6553.600: 17.8580% ( 395) 00:08:23.519 6553.600 - 6604.012: 20.4294% ( 418) 00:08:23.519 6604.012 - 6654.425: 22.9269% ( 406) 00:08:23.519 6654.425 - 6704.837: 25.3875% ( 400) 00:08:23.519 6704.837 - 6755.249: 28.0081% ( 426) 00:08:23.519 6755.249 - 6805.662: 30.6471% ( 429) 00:08:23.519 6805.662 - 6856.074: 33.2739% ( 427) 00:08:23.519 6856.074 - 6906.486: 35.9313% ( 432) 00:08:23.520 6906.486 - 6956.898: 38.4412% ( 408) 00:08:23.520 6956.898 - 7007.311: 41.2217% ( 452) 00:08:23.520 7007.311 - 7057.723: 44.0699% ( 463) 00:08:23.520 7057.723 - 7108.135: 46.4505% ( 387) 00:08:23.520 7108.135 - 7158.548: 48.7328% ( 371) 00:08:23.520 7158.548 - 7208.960: 50.6828% ( 317) 00:08:23.520 7208.960 - 7259.372: 52.5283% ( 300) 00:08:23.520 7259.372 - 7309.785: 54.3184% ( 291) 00:08:23.520 7309.785 - 7360.197: 55.7948% ( 240) 00:08:23.520 7360.197 - 7410.609: 56.9513% ( 188) 00:08:23.520 7410.609 - 7461.022: 57.7571% ( 131) 00:08:23.520 7461.022 - 7511.434: 58.4154% ( 107) 00:08:23.520 7511.434 - 7561.846: 59.0920% ( 110) 00:08:23.520 7561.846 - 7612.258: 59.6088% ( 84) 00:08:23.520 7612.258 - 7662.671: 60.1009% ( 80) 00:08:23.520 7662.671 - 7713.083: 60.6914% ( 96) 00:08:23.520 7713.083 - 7763.495: 61.1897% ( 81) 00:08:23.520 7763.495 - 7813.908: 61.7064% ( 84) 00:08:23.520 7813.908 - 7864.320: 62.3093% ( 98) 00:08:23.520 7864.320 - 7914.732: 62.8383% ( 86) 00:08:23.520 7914.732 - 7965.145: 63.4289% ( 96) 00:08:23.520 7965.145 - 8015.557: 63.9825% ( 90) 00:08:23.520 8015.557 - 8065.969: 64.5608% ( 94) 00:08:23.520 8065.969 - 8116.382: 65.1944% ( 103) 00:08:23.520 8116.382 - 8166.794: 65.7849% ( 96) 00:08:23.520 8166.794 - 8217.206: 66.4678% ( 111) 00:08:23.520 8217.206 - 8267.618: 67.2121% ( 121) 00:08:23.520 8267.618 - 8318.031: 67.9872% ( 126) 00:08:23.520 8318.031 - 8368.443: 68.7685% ( 127) 00:08:23.520 8368.443 - 8418.855: 69.7035% ( 152) 00:08:23.520 8418.855 - 8469.268: 70.6139% ( 148) 00:08:23.520 8469.268 - 8519.680: 71.5797% ( 157) 00:08:23.520 8519.680 - 8570.092: 72.5394% ( 156) 00:08:23.520 8570.092 - 8620.505: 73.5175% ( 159) 00:08:23.520 8620.505 - 8670.917: 74.5079% ( 161) 00:08:23.520 8670.917 - 8721.329: 75.5413% ( 168) 00:08:23.520 8721.329 - 8771.742: 76.6425% ( 179) 00:08:23.520 8771.742 - 8822.154: 77.6452% ( 163) 00:08:23.520 8822.154 - 8872.566: 78.7279% ( 176) 00:08:23.520 8872.566 - 8922.978: 79.7490% ( 166) 00:08:23.520 8922.978 - 8973.391: 80.7517% ( 163) 00:08:23.520 8973.391 - 9023.803: 81.7483% ( 162) 00:08:23.520 9023.803 - 9074.215: 82.7141% ( 157) 00:08:23.520 
9074.215 - 9124.628: 83.5199% ( 131) 00:08:23.520 9124.628 - 9175.040: 84.4488% ( 151) 00:08:23.520 9175.040 - 9225.452: 85.3346% ( 144) 00:08:23.520 9225.452 - 9275.865: 86.1836% ( 138) 00:08:23.520 9275.865 - 9326.277: 87.0202% ( 136) 00:08:23.520 9326.277 - 9376.689: 87.7522% ( 119) 00:08:23.520 9376.689 - 9427.102: 88.4658% ( 116) 00:08:23.520 9427.102 - 9477.514: 89.0010% ( 87) 00:08:23.520 9477.514 - 9527.926: 89.5054% ( 82) 00:08:23.520 9527.926 - 9578.338: 89.9852% ( 78) 00:08:23.520 9578.338 - 9628.751: 90.3912% ( 66) 00:08:23.520 9628.751 - 9679.163: 90.8157% ( 69) 00:08:23.520 9679.163 - 9729.575: 91.1848% ( 60) 00:08:23.520 9729.575 - 9779.988: 91.4924% ( 50) 00:08:23.520 9779.988 - 9830.400: 91.7692% ( 45) 00:08:23.520 9830.400 - 9880.812: 91.9968% ( 37) 00:08:23.520 9880.812 - 9931.225: 92.1506% ( 25) 00:08:23.520 9931.225 - 9981.637: 92.2798% ( 21) 00:08:23.520 9981.637 - 10032.049: 92.3720% ( 15) 00:08:23.520 10032.049 - 10082.462: 92.4582% ( 14) 00:08:23.520 10082.462 - 10132.874: 92.5504% ( 15) 00:08:23.520 10132.874 - 10183.286: 92.6243% ( 12) 00:08:23.520 10183.286 - 10233.698: 92.6981% ( 12) 00:08:23.520 10233.698 - 10284.111: 92.8396% ( 23) 00:08:23.520 10284.111 - 10334.523: 92.9134% ( 12) 00:08:23.520 10334.523 - 10384.935: 93.0057% ( 15) 00:08:23.520 10384.935 - 10435.348: 93.0795% ( 12) 00:08:23.520 10435.348 - 10485.760: 93.1533% ( 12) 00:08:23.520 10485.760 - 10536.172: 93.2148% ( 10) 00:08:23.520 10536.172 - 10586.585: 93.2886% ( 12) 00:08:23.520 10586.585 - 10636.997: 93.3686% ( 13) 00:08:23.520 10636.997 - 10687.409: 93.4486% ( 13) 00:08:23.520 10687.409 - 10737.822: 93.5408% ( 15) 00:08:23.520 10737.822 - 10788.234: 93.6208% ( 13) 00:08:23.520 10788.234 - 10838.646: 93.7315% ( 18) 00:08:23.520 10838.646 - 10889.058: 93.8607% ( 21) 00:08:23.520 10889.058 - 10939.471: 93.9776% ( 19) 00:08:23.520 10939.471 - 10989.883: 94.0822% ( 17) 00:08:23.520 10989.883 - 11040.295: 94.1806% ( 16) 00:08:23.520 11040.295 - 11090.708: 94.3159% ( 22) 00:08:23.520 11090.708 - 11141.120: 94.4513% ( 22) 00:08:23.520 11141.120 - 11191.532: 94.5743% ( 20) 00:08:23.520 11191.532 - 11241.945: 94.6973% ( 20) 00:08:23.520 11241.945 - 11292.357: 94.8327% ( 22) 00:08:23.520 11292.357 - 11342.769: 94.9373% ( 17) 00:08:23.520 11342.769 - 11393.182: 95.0172% ( 13) 00:08:23.520 11393.182 - 11443.594: 95.1095% ( 15) 00:08:23.520 11443.594 - 11494.006: 95.1833% ( 12) 00:08:23.520 11494.006 - 11544.418: 95.2633% ( 13) 00:08:23.520 11544.418 - 11594.831: 95.3740% ( 18) 00:08:23.520 11594.831 - 11645.243: 95.4970% ( 20) 00:08:23.520 11645.243 - 11695.655: 95.5955% ( 16) 00:08:23.520 11695.655 - 11746.068: 95.6877% ( 15) 00:08:23.520 11746.068 - 11796.480: 95.7616% ( 12) 00:08:23.520 11796.480 - 11846.892: 95.8846% ( 20) 00:08:23.520 11846.892 - 11897.305: 95.9830% ( 16) 00:08:23.520 11897.305 - 11947.717: 96.0568% ( 12) 00:08:23.520 11947.717 - 11998.129: 96.1430% ( 14) 00:08:23.520 11998.129 - 12048.542: 96.2106% ( 11) 00:08:23.520 12048.542 - 12098.954: 96.2844% ( 12) 00:08:23.520 12098.954 - 12149.366: 96.3337% ( 8) 00:08:23.520 12149.366 - 12199.778: 96.3829% ( 8) 00:08:23.520 12199.778 - 12250.191: 96.4321% ( 8) 00:08:23.520 12250.191 - 12300.603: 96.4813% ( 8) 00:08:23.520 12300.603 - 12351.015: 96.5367% ( 9) 00:08:23.520 12351.015 - 12401.428: 96.6043% ( 11) 00:08:23.520 12401.428 - 12451.840: 96.6720% ( 11) 00:08:23.520 12451.840 - 12502.252: 96.7581% ( 14) 00:08:23.520 12502.252 - 12552.665: 96.8135% ( 9) 00:08:23.520 12552.665 - 12603.077: 96.8935% ( 13) 00:08:23.520 12603.077 - 
12653.489: 96.9488% ( 9) 00:08:23.520 12653.489 - 12703.902: 97.0226% ( 12) 00:08:23.520 12703.902 - 12754.314: 97.0780% ( 9) 00:08:23.520 12754.314 - 12804.726: 97.1272% ( 8) 00:08:23.520 12804.726 - 12855.138: 97.1887% ( 10) 00:08:23.520 12855.138 - 12905.551: 97.2379% ( 8) 00:08:23.520 12905.551 - 13006.375: 97.3610% ( 20) 00:08:23.520 13006.375 - 13107.200: 97.4348% ( 12) 00:08:23.520 13107.200 - 13208.025: 97.5517% ( 19) 00:08:23.520 13208.025 - 13308.849: 97.6993% ( 24) 00:08:23.520 13308.849 - 13409.674: 97.8839% ( 30) 00:08:23.520 13409.674 - 13510.498: 98.0623% ( 29) 00:08:23.520 13510.498 - 13611.323: 98.2530% ( 31) 00:08:23.520 13611.323 - 13712.148: 98.4129% ( 26) 00:08:23.520 13712.148 - 13812.972: 98.5544% ( 23) 00:08:23.520 13812.972 - 13913.797: 98.7020% ( 24) 00:08:23.520 13913.797 - 14014.622: 98.8066% ( 17) 00:08:23.520 14014.622 - 14115.446: 98.9050% ( 16) 00:08:23.520 14115.446 - 14216.271: 99.0096% ( 17) 00:08:23.520 14216.271 - 14317.095: 99.1019% ( 15) 00:08:23.520 14317.095 - 14417.920: 99.1695% ( 11) 00:08:23.520 14417.920 - 14518.745: 99.2126% ( 7) 00:08:23.520 16736.886 - 16837.711: 99.2188% ( 1) 00:08:23.520 16837.711 - 16938.535: 99.2557% ( 6) 00:08:23.520 16938.535 - 17039.360: 99.2926% ( 6) 00:08:23.520 17039.360 - 17140.185: 99.3233% ( 5) 00:08:23.520 17140.185 - 17241.009: 99.3541% ( 5) 00:08:23.520 17241.009 - 17341.834: 99.3910% ( 6) 00:08:23.520 17341.834 - 17442.658: 99.4218% ( 5) 00:08:23.520 17442.658 - 17543.483: 99.4587% ( 6) 00:08:23.520 17543.483 - 17644.308: 99.4833% ( 4) 00:08:23.520 17644.308 - 17745.132: 99.5202% ( 6) 00:08:23.520 17745.132 - 17845.957: 99.5509% ( 5) 00:08:23.520 17845.957 - 17946.782: 99.5817% ( 5) 00:08:23.520 17946.782 - 18047.606: 99.5940% ( 2) 00:08:23.520 18047.606 - 18148.431: 99.6063% ( 2) 00:08:23.520 25609.452 - 25710.277: 99.6186% ( 2) 00:08:23.520 25710.277 - 25811.102: 99.6371% ( 3) 00:08:23.520 25811.102 - 26012.751: 99.6740% ( 6) 00:08:23.520 26012.751 - 26214.400: 99.7047% ( 5) 00:08:23.520 26214.400 - 26416.049: 99.7355% ( 5) 00:08:23.520 26416.049 - 26617.698: 99.7724% ( 6) 00:08:23.520 26617.698 - 26819.348: 99.8031% ( 5) 00:08:23.520 26819.348 - 27020.997: 99.8401% ( 6) 00:08:23.520 27020.997 - 27222.646: 99.8770% ( 6) 00:08:23.520 27222.646 - 27424.295: 99.9077% ( 5) 00:08:23.520 27424.295 - 27625.945: 99.9446% ( 6) 00:08:23.520 27625.945 - 27827.594: 99.9754% ( 5) 00:08:23.520 27827.594 - 28029.243: 100.0000% ( 4) 00:08:23.520 00:08:23.520 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:23.520 ============================================================================== 00:08:23.520 Range in us Cumulative IO count 00:08:23.520 4461.489 - 4486.695: 0.0062% ( 1) 00:08:23.520 4486.695 - 4511.902: 0.0246% ( 3) 00:08:23.520 4511.902 - 4537.108: 0.0369% ( 2) 00:08:23.520 4537.108 - 4562.314: 0.0492% ( 2) 00:08:23.520 4562.314 - 4587.520: 0.0615% ( 2) 00:08:23.520 4587.520 - 4612.726: 0.0738% ( 2) 00:08:23.521 4612.726 - 4637.932: 0.0861% ( 2) 00:08:23.521 4637.932 - 4663.138: 0.0984% ( 2) 00:08:23.521 4663.138 - 4688.345: 0.1107% ( 2) 00:08:23.521 4688.345 - 4713.551: 0.1292% ( 3) 00:08:23.521 4713.551 - 4738.757: 0.1415% ( 2) 00:08:23.521 4738.757 - 4763.963: 0.1538% ( 2) 00:08:23.521 4763.963 - 4789.169: 0.1661% ( 2) 00:08:23.521 4789.169 - 4814.375: 0.1784% ( 2) 00:08:23.521 4814.375 - 4839.582: 0.1907% ( 2) 00:08:23.521 4839.582 - 4864.788: 0.2030% ( 2) 00:08:23.521 4864.788 - 4889.994: 0.2215% ( 3) 00:08:23.521 4889.994 - 4915.200: 0.2338% ( 2) 00:08:23.521 4915.200 - 4940.406: 
0.2461% ( 2) 00:08:23.521 4940.406 - 4965.612: 0.2584% ( 2) 00:08:23.521 4965.612 - 4990.818: 0.2768% ( 3) 00:08:23.521 4990.818 - 5016.025: 0.2891% ( 2) 00:08:23.521 5016.025 - 5041.231: 0.3014% ( 2) 00:08:23.521 5041.231 - 5066.437: 0.3137% ( 2) 00:08:23.521 5066.437 - 5091.643: 0.3260% ( 2) 00:08:23.521 5091.643 - 5116.849: 0.3445% ( 3) 00:08:23.521 5116.849 - 5142.055: 0.3568% ( 2) 00:08:23.521 5142.055 - 5167.262: 0.3691% ( 2) 00:08:23.521 5167.262 - 5192.468: 0.3814% ( 2) 00:08:23.521 5192.468 - 5217.674: 0.3937% ( 2) 00:08:23.521 6049.477 - 6074.683: 0.4122% ( 3) 00:08:23.521 6074.683 - 6099.889: 0.5536% ( 23) 00:08:23.521 6099.889 - 6125.095: 0.8182% ( 43) 00:08:23.521 6125.095 - 6150.302: 1.2242% ( 66) 00:08:23.521 6150.302 - 6175.508: 1.6855% ( 75) 00:08:23.521 6175.508 - 6200.714: 2.2453% ( 91) 00:08:23.521 6200.714 - 6225.920: 2.9097% ( 108) 00:08:23.521 6225.920 - 6251.126: 3.4879% ( 94) 00:08:23.521 6251.126 - 6276.332: 4.0354% ( 89) 00:08:23.521 6276.332 - 6301.538: 4.7244% ( 112) 00:08:23.521 6301.538 - 6326.745: 5.4318% ( 115) 00:08:23.521 6326.745 - 6351.951: 6.2869% ( 139) 00:08:23.521 6351.951 - 6377.157: 7.3142% ( 167) 00:08:23.521 6377.157 - 6402.363: 8.5261% ( 197) 00:08:23.521 6402.363 - 6427.569: 9.7133% ( 193) 00:08:23.521 6427.569 - 6452.775: 10.9437% ( 200) 00:08:23.521 6452.775 - 6503.188: 13.6442% ( 439) 00:08:23.521 6503.188 - 6553.600: 16.5047% ( 465) 00:08:23.521 6553.600 - 6604.012: 19.4267% ( 475) 00:08:23.521 6604.012 - 6654.425: 22.1703% ( 446) 00:08:23.521 6654.425 - 6704.837: 25.1661% ( 487) 00:08:23.521 6704.837 - 6755.249: 28.1742% ( 489) 00:08:23.521 6755.249 - 6805.662: 31.0901% ( 474) 00:08:23.521 6805.662 - 6856.074: 34.0613% ( 483) 00:08:23.521 6856.074 - 6906.486: 37.1125% ( 496) 00:08:23.521 6906.486 - 6956.898: 40.2559% ( 511) 00:08:23.521 6956.898 - 7007.311: 43.2394% ( 485) 00:08:23.521 7007.311 - 7057.723: 46.0999% ( 465) 00:08:23.521 7057.723 - 7108.135: 48.4190% ( 377) 00:08:23.521 7108.135 - 7158.548: 50.6275% ( 359) 00:08:23.521 7158.548 - 7208.960: 52.6759% ( 333) 00:08:23.521 7208.960 - 7259.372: 54.4045% ( 281) 00:08:23.521 7259.372 - 7309.785: 55.7517% ( 219) 00:08:23.521 7309.785 - 7360.197: 56.7667% ( 165) 00:08:23.521 7360.197 - 7410.609: 57.6218% ( 139) 00:08:23.521 7410.609 - 7461.022: 58.3661% ( 121) 00:08:23.521 7461.022 - 7511.434: 58.9813% ( 100) 00:08:23.521 7511.434 - 7561.846: 59.5288% ( 89) 00:08:23.521 7561.846 - 7612.258: 60.0517% ( 85) 00:08:23.521 7612.258 - 7662.671: 60.5192% ( 76) 00:08:23.521 7662.671 - 7713.083: 60.9744% ( 74) 00:08:23.521 7713.083 - 7763.495: 61.4788% ( 82) 00:08:23.521 7763.495 - 7813.908: 62.0263% ( 89) 00:08:23.521 7813.908 - 7864.320: 62.5554% ( 86) 00:08:23.521 7864.320 - 7914.732: 63.1090% ( 90) 00:08:23.521 7914.732 - 7965.145: 63.7180% ( 99) 00:08:23.521 7965.145 - 8015.557: 64.3209% ( 98) 00:08:23.521 8015.557 - 8065.969: 64.9176% ( 97) 00:08:23.521 8065.969 - 8116.382: 65.4712% ( 90) 00:08:23.521 8116.382 - 8166.794: 66.0433% ( 93) 00:08:23.521 8166.794 - 8217.206: 66.6892% ( 105) 00:08:23.521 8217.206 - 8267.618: 67.3105% ( 101) 00:08:23.521 8267.618 - 8318.031: 68.0241% ( 116) 00:08:23.521 8318.031 - 8368.443: 68.8853% ( 140) 00:08:23.521 8368.443 - 8418.855: 69.6112% ( 118) 00:08:23.521 8418.855 - 8469.268: 70.4417% ( 135) 00:08:23.521 8469.268 - 8519.680: 71.3583% ( 149) 00:08:23.521 8519.680 - 8570.092: 72.2995% ( 153) 00:08:23.521 8570.092 - 8620.505: 73.3022% ( 163) 00:08:23.521 8620.505 - 8670.917: 74.3356% ( 168) 00:08:23.521 8670.917 - 8721.329: 75.2830% ( 154) 
00:08:23.521 8721.329 - 8771.742: 76.2918% ( 164) 00:08:23.521 8771.742 - 8822.154: 77.3376% ( 170) 00:08:23.521 8822.154 - 8872.566: 78.3711% ( 168) 00:08:23.521 8872.566 - 8922.978: 79.3246% ( 155) 00:08:23.521 8922.978 - 8973.391: 80.3211% ( 162) 00:08:23.521 8973.391 - 9023.803: 81.3730% ( 171) 00:08:23.521 9023.803 - 9074.215: 82.3880% ( 165) 00:08:23.521 9074.215 - 9124.628: 83.3784% ( 161) 00:08:23.521 9124.628 - 9175.040: 84.4119% ( 168) 00:08:23.521 9175.040 - 9225.452: 85.3962% ( 160) 00:08:23.521 9225.452 - 9275.865: 86.2574% ( 140) 00:08:23.521 9275.865 - 9326.277: 87.0017% ( 121) 00:08:23.521 9326.277 - 9376.689: 87.7953% ( 129) 00:08:23.521 9376.689 - 9427.102: 88.4535% ( 107) 00:08:23.521 9427.102 - 9477.514: 89.0748% ( 101) 00:08:23.521 9477.514 - 9527.926: 89.6407% ( 92) 00:08:23.521 9527.926 - 9578.338: 90.1575% ( 84) 00:08:23.521 9578.338 - 9628.751: 90.6558% ( 81) 00:08:23.521 9628.751 - 9679.163: 91.0987% ( 72) 00:08:23.521 9679.163 - 9729.575: 91.4124% ( 51) 00:08:23.521 9729.575 - 9779.988: 91.6769% ( 43) 00:08:23.521 9779.988 - 9830.400: 91.8922% ( 35) 00:08:23.521 9830.400 - 9880.812: 92.0829% ( 31) 00:08:23.521 9880.812 - 9931.225: 92.2244% ( 23) 00:08:23.521 9931.225 - 9981.637: 92.3597% ( 22) 00:08:23.521 9981.637 - 10032.049: 92.5135% ( 25) 00:08:23.521 10032.049 - 10082.462: 92.6366% ( 20) 00:08:23.521 10082.462 - 10132.874: 92.7350% ( 16) 00:08:23.521 10132.874 - 10183.286: 92.8150% ( 13) 00:08:23.521 10183.286 - 10233.698: 92.9011% ( 14) 00:08:23.521 10233.698 - 10284.111: 92.9811% ( 13) 00:08:23.521 10284.111 - 10334.523: 93.0426% ( 10) 00:08:23.521 10334.523 - 10384.935: 93.1164% ( 12) 00:08:23.521 10384.935 - 10435.348: 93.1902% ( 12) 00:08:23.521 10435.348 - 10485.760: 93.2763% ( 14) 00:08:23.521 10485.760 - 10536.172: 93.3501% ( 12) 00:08:23.521 10536.172 - 10586.585: 93.4301% ( 13) 00:08:23.521 10586.585 - 10636.997: 93.5101% ( 13) 00:08:23.521 10636.997 - 10687.409: 93.5962% ( 14) 00:08:23.521 10687.409 - 10737.822: 93.7008% ( 17) 00:08:23.521 10737.822 - 10788.234: 93.7869% ( 14) 00:08:23.521 10788.234 - 10838.646: 93.8730% ( 14) 00:08:23.521 10838.646 - 10889.058: 93.9653% ( 15) 00:08:23.521 10889.058 - 10939.471: 94.0576% ( 15) 00:08:23.521 10939.471 - 10989.883: 94.1683% ( 18) 00:08:23.521 10989.883 - 11040.295: 94.2667% ( 16) 00:08:23.521 11040.295 - 11090.708: 94.3590% ( 15) 00:08:23.521 11090.708 - 11141.120: 94.4697% ( 18) 00:08:23.521 11141.120 - 11191.532: 94.5805% ( 18) 00:08:23.521 11191.532 - 11241.945: 94.6912% ( 18) 00:08:23.521 11241.945 - 11292.357: 94.7958% ( 17) 00:08:23.521 11292.357 - 11342.769: 94.9065% ( 18) 00:08:23.521 11342.769 - 11393.182: 95.0111% ( 17) 00:08:23.521 11393.182 - 11443.594: 95.1095% ( 16) 00:08:23.521 11443.594 - 11494.006: 95.2264% ( 19) 00:08:23.521 11494.006 - 11544.418: 95.3310% ( 17) 00:08:23.521 11544.418 - 11594.831: 95.4171% ( 14) 00:08:23.521 11594.831 - 11645.243: 95.4786% ( 10) 00:08:23.521 11645.243 - 11695.655: 95.5401% ( 10) 00:08:23.521 11695.655 - 11746.068: 95.6139% ( 12) 00:08:23.521 11746.068 - 11796.480: 95.7185% ( 17) 00:08:23.521 11796.480 - 11846.892: 95.7985% ( 13) 00:08:23.521 11846.892 - 11897.305: 95.8661% ( 11) 00:08:23.521 11897.305 - 11947.717: 95.9277% ( 10) 00:08:23.521 11947.717 - 11998.129: 95.9892% ( 10) 00:08:23.521 11998.129 - 12048.542: 96.0630% ( 12) 00:08:23.521 12048.542 - 12098.954: 96.1184% ( 9) 00:08:23.521 12098.954 - 12149.366: 96.2168% ( 16) 00:08:23.521 12149.366 - 12199.778: 96.2968% ( 13) 00:08:23.521 12199.778 - 12250.191: 96.3767% ( 13) 00:08:23.521 
12250.191 - 12300.603: 96.4813% ( 17) 00:08:23.521 12300.603 - 12351.015: 96.5920% ( 18) 00:08:23.521 12351.015 - 12401.428: 96.7089% ( 19) 00:08:23.521 12401.428 - 12451.840: 96.8012% ( 15) 00:08:23.521 12451.840 - 12502.252: 96.8935% ( 15) 00:08:23.521 12502.252 - 12552.665: 96.9857% ( 15) 00:08:23.521 12552.665 - 12603.077: 97.0719% ( 14) 00:08:23.521 12603.077 - 12653.489: 97.1703% ( 16) 00:08:23.521 12653.489 - 12703.902: 97.2564% ( 14) 00:08:23.521 12703.902 - 12754.314: 97.3241% ( 11) 00:08:23.521 12754.314 - 12804.726: 97.3979% ( 12) 00:08:23.521 12804.726 - 12855.138: 97.4594% ( 10) 00:08:23.521 12855.138 - 12905.551: 97.5394% ( 13) 00:08:23.521 12905.551 - 13006.375: 97.7239% ( 30) 00:08:23.521 13006.375 - 13107.200: 97.8777% ( 25) 00:08:23.521 13107.200 - 13208.025: 97.9823% ( 17) 00:08:23.521 13208.025 - 13308.849: 98.1176% ( 22) 00:08:23.521 13308.849 - 13409.674: 98.2222% ( 17) 00:08:23.521 13409.674 - 13510.498: 98.3698% ( 24) 00:08:23.521 13510.498 - 13611.323: 98.5052% ( 22) 00:08:23.522 13611.323 - 13712.148: 98.6467% ( 23) 00:08:23.522 13712.148 - 13812.972: 98.7881% ( 23) 00:08:23.522 13812.972 - 13913.797: 98.8866% ( 16) 00:08:23.522 13913.797 - 14014.622: 98.9604% ( 12) 00:08:23.522 14014.622 - 14115.446: 99.0404% ( 13) 00:08:23.522 14115.446 - 14216.271: 99.1142% ( 12) 00:08:23.522 14216.271 - 14317.095: 99.1695% ( 9) 00:08:23.522 14317.095 - 14417.920: 99.2126% ( 7) 00:08:23.522 16535.237 - 16636.062: 99.2311% ( 3) 00:08:23.522 16636.062 - 16736.886: 99.2495% ( 3) 00:08:23.522 16736.886 - 16837.711: 99.2680% ( 3) 00:08:23.522 16837.711 - 16938.535: 99.2864% ( 3) 00:08:23.522 16938.535 - 17039.360: 99.3110% ( 4) 00:08:23.522 17039.360 - 17140.185: 99.3233% ( 2) 00:08:23.522 17140.185 - 17241.009: 99.3479% ( 4) 00:08:23.522 17241.009 - 17341.834: 99.3664% ( 3) 00:08:23.522 17341.834 - 17442.658: 99.3848% ( 3) 00:08:23.522 17442.658 - 17543.483: 99.4033% ( 3) 00:08:23.522 17543.483 - 17644.308: 99.4279% ( 4) 00:08:23.522 17644.308 - 17745.132: 99.4464% ( 3) 00:08:23.522 17745.132 - 17845.957: 99.4648% ( 3) 00:08:23.522 17845.957 - 17946.782: 99.4894% ( 4) 00:08:23.522 17946.782 - 18047.606: 99.5079% ( 3) 00:08:23.522 18047.606 - 18148.431: 99.5263% ( 3) 00:08:23.522 18148.431 - 18249.255: 99.5448% ( 3) 00:08:23.522 18249.255 - 18350.080: 99.5694% ( 4) 00:08:23.522 18350.080 - 18450.905: 99.5878% ( 3) 00:08:23.522 18450.905 - 18551.729: 99.6063% ( 3) 00:08:23.522 26012.751 - 26214.400: 99.6309% ( 4) 00:08:23.522 26214.400 - 26416.049: 99.6740% ( 7) 00:08:23.522 26416.049 - 26617.698: 99.7109% ( 6) 00:08:23.522 26617.698 - 26819.348: 99.7478% ( 6) 00:08:23.522 26819.348 - 27020.997: 99.7908% ( 7) 00:08:23.522 27020.997 - 27222.646: 99.8278% ( 6) 00:08:23.522 27222.646 - 27424.295: 99.8708% ( 7) 00:08:23.522 27424.295 - 27625.945: 99.9077% ( 6) 00:08:23.522 27625.945 - 27827.594: 99.9508% ( 7) 00:08:23.522 27827.594 - 28029.243: 99.9877% ( 6) 00:08:23.522 28029.243 - 28230.892: 100.0000% ( 2) 00:08:23.522 00:08:23.522 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:23.522 ============================================================================== 00:08:23.522 Range in us Cumulative IO count 00:08:23.522 3982.572 - 4007.778: 0.0185% ( 3) 00:08:23.522 4007.778 - 4032.985: 0.0369% ( 3) 00:08:23.522 4032.985 - 4058.191: 0.0492% ( 2) 00:08:23.522 4058.191 - 4083.397: 0.0615% ( 2) 00:08:23.522 4083.397 - 4108.603: 0.0738% ( 2) 00:08:23.522 4108.603 - 4133.809: 0.0861% ( 2) 00:08:23.522 4133.809 - 4159.015: 0.0984% ( 2) 00:08:23.522 4159.015 - 4184.222: 
0.1107% ( 2) 00:08:23.522 4184.222 - 4209.428: 0.1230% ( 2) 00:08:23.522 4209.428 - 4234.634: 0.1353% ( 2) 00:08:23.522 4234.634 - 4259.840: 0.1476% ( 2) 00:08:23.522 4259.840 - 4285.046: 0.1661% ( 3) 00:08:23.522 4285.046 - 4310.252: 0.1784% ( 2) 00:08:23.522 4310.252 - 4335.458: 0.1907% ( 2) 00:08:23.522 4335.458 - 4360.665: 0.2030% ( 2) 00:08:23.522 4360.665 - 4385.871: 0.2153% ( 2) 00:08:23.522 4385.871 - 4411.077: 0.2276% ( 2) 00:08:23.522 4411.077 - 4436.283: 0.2399% ( 2) 00:08:23.522 4436.283 - 4461.489: 0.2584% ( 3) 00:08:23.522 4461.489 - 4486.695: 0.2707% ( 2) 00:08:23.522 4486.695 - 4511.902: 0.2830% ( 2) 00:08:23.522 4511.902 - 4537.108: 0.2953% ( 2) 00:08:23.522 4537.108 - 4562.314: 0.3076% ( 2) 00:08:23.522 4562.314 - 4587.520: 0.3199% ( 2) 00:08:23.522 4587.520 - 4612.726: 0.3383% ( 3) 00:08:23.522 4612.726 - 4637.932: 0.3506% ( 2) 00:08:23.522 4637.932 - 4663.138: 0.3629% ( 2) 00:08:23.522 4663.138 - 4688.345: 0.3752% ( 2) 00:08:23.522 4688.345 - 4713.551: 0.3875% ( 2) 00:08:23.522 4713.551 - 4738.757: 0.3937% ( 1) 00:08:23.522 5520.148 - 5545.354: 0.4060% ( 2) 00:08:23.522 5545.354 - 5570.560: 0.4183% ( 2) 00:08:23.522 5570.560 - 5595.766: 0.4306% ( 2) 00:08:23.522 5595.766 - 5620.972: 0.4429% ( 2) 00:08:23.522 5620.972 - 5646.178: 0.4552% ( 2) 00:08:23.522 5646.178 - 5671.385: 0.4675% ( 2) 00:08:23.522 5671.385 - 5696.591: 0.4921% ( 4) 00:08:23.522 5696.591 - 5721.797: 0.5044% ( 2) 00:08:23.522 5721.797 - 5747.003: 0.5167% ( 2) 00:08:23.522 5747.003 - 5772.209: 0.5290% ( 2) 00:08:23.522 5772.209 - 5797.415: 0.5413% ( 2) 00:08:23.522 5797.415 - 5822.622: 0.5536% ( 2) 00:08:23.522 5822.622 - 5847.828: 0.5659% ( 2) 00:08:23.522 5847.828 - 5873.034: 0.5782% ( 2) 00:08:23.522 5873.034 - 5898.240: 0.5906% ( 2) 00:08:23.522 5898.240 - 5923.446: 0.6029% ( 2) 00:08:23.522 5923.446 - 5948.652: 0.6152% ( 2) 00:08:23.522 5948.652 - 5973.858: 0.6275% ( 2) 00:08:23.522 5973.858 - 5999.065: 0.6398% ( 2) 00:08:23.522 5999.065 - 6024.271: 0.6521% ( 2) 00:08:23.522 6024.271 - 6049.477: 0.6828% ( 5) 00:08:23.522 6049.477 - 6074.683: 0.7013% ( 3) 00:08:23.522 6074.683 - 6099.889: 0.7382% ( 6) 00:08:23.522 6099.889 - 6125.095: 0.9535% ( 35) 00:08:23.522 6125.095 - 6150.302: 1.2180% ( 43) 00:08:23.522 6150.302 - 6175.508: 1.7594% ( 88) 00:08:23.522 6175.508 - 6200.714: 2.4176% ( 107) 00:08:23.522 6200.714 - 6225.920: 3.0327% ( 100) 00:08:23.522 6225.920 - 6251.126: 3.8201% ( 128) 00:08:23.522 6251.126 - 6276.332: 4.4845% ( 108) 00:08:23.522 6276.332 - 6301.538: 5.0812% ( 97) 00:08:23.522 6301.538 - 6326.745: 5.7640% ( 111) 00:08:23.522 6326.745 - 6351.951: 6.6499% ( 144) 00:08:23.522 6351.951 - 6377.157: 7.6280% ( 159) 00:08:23.522 6377.157 - 6402.363: 8.6922% ( 173) 00:08:23.522 6402.363 - 6427.569: 9.8425% ( 187) 00:08:23.522 6427.569 - 6452.775: 11.1651% ( 215) 00:08:23.522 6452.775 - 6503.188: 13.6688% ( 407) 00:08:23.522 6503.188 - 6553.600: 16.4370% ( 450) 00:08:23.522 6553.600 - 6604.012: 19.2667% ( 460) 00:08:23.522 6604.012 - 6654.425: 22.2072% ( 478) 00:08:23.522 6654.425 - 6704.837: 25.1969% ( 486) 00:08:23.522 6704.837 - 6755.249: 28.1435% ( 479) 00:08:23.522 6755.249 - 6805.662: 31.0901% ( 479) 00:08:23.522 6805.662 - 6856.074: 34.1535% ( 498) 00:08:23.522 6856.074 - 6906.486: 37.2109% ( 497) 00:08:23.522 6906.486 - 6956.898: 40.3236% ( 506) 00:08:23.522 6956.898 - 7007.311: 43.4424% ( 507) 00:08:23.522 7007.311 - 7057.723: 46.2721% ( 460) 00:08:23.522 7057.723 - 7108.135: 48.7020% ( 395) 00:08:23.522 7108.135 - 7158.548: 50.9904% ( 372) 00:08:23.522 7158.548 - 7208.960: 53.1435% 
( 350) 00:08:23.522 7208.960 - 7259.372: 54.9705% ( 297) 00:08:23.522 7259.372 - 7309.785: 56.3853% ( 230) 00:08:23.522 7309.785 - 7360.197: 57.5357% ( 187) 00:08:23.522 7360.197 - 7410.609: 58.4338% ( 146) 00:08:23.522 7410.609 - 7461.022: 59.2335% ( 130) 00:08:23.522 7461.022 - 7511.434: 59.9717% ( 120) 00:08:23.522 7511.434 - 7561.846: 60.5130% ( 88) 00:08:23.522 7561.846 - 7612.258: 60.9867% ( 77) 00:08:23.522 7612.258 - 7662.671: 61.4481% ( 75) 00:08:23.522 7662.671 - 7713.083: 61.9587% ( 83) 00:08:23.522 7713.083 - 7763.495: 62.5123% ( 90) 00:08:23.522 7763.495 - 7813.908: 63.0044% ( 80) 00:08:23.522 7813.908 - 7864.320: 63.5150% ( 83) 00:08:23.522 7864.320 - 7914.732: 63.9702% ( 74) 00:08:23.522 7914.732 - 7965.145: 64.4254% ( 74) 00:08:23.522 7965.145 - 8015.557: 64.8499% ( 69) 00:08:23.522 8015.557 - 8065.969: 65.4097% ( 91) 00:08:23.522 8065.969 - 8116.382: 65.9264% ( 84) 00:08:23.522 8116.382 - 8166.794: 66.4924% ( 92) 00:08:23.522 8166.794 - 8217.206: 67.0583% ( 92) 00:08:23.522 8217.206 - 8267.618: 67.6243% ( 92) 00:08:23.522 8267.618 - 8318.031: 68.2210% ( 97) 00:08:23.522 8318.031 - 8368.443: 68.8300% ( 99) 00:08:23.522 8368.443 - 8418.855: 69.4820% ( 106) 00:08:23.522 8418.855 - 8469.268: 70.1280% ( 105) 00:08:23.522 8469.268 - 8519.680: 70.9953% ( 141) 00:08:23.522 8519.680 - 8570.092: 71.8750% ( 143) 00:08:23.522 8570.092 - 8620.505: 72.7916% ( 149) 00:08:23.522 8620.505 - 8670.917: 73.6590% ( 141) 00:08:23.522 8670.917 - 8721.329: 74.5571% ( 146) 00:08:23.522 8721.329 - 8771.742: 75.5413% ( 160) 00:08:23.522 8771.742 - 8822.154: 76.5563% ( 165) 00:08:23.522 8822.154 - 8872.566: 77.6267% ( 174) 00:08:23.522 8872.566 - 8922.978: 78.6479% ( 166) 00:08:23.522 8922.978 - 8973.391: 79.6937% ( 170) 00:08:23.522 8973.391 - 9023.803: 80.7087% ( 165) 00:08:23.522 9023.803 - 9074.215: 81.6991% ( 161) 00:08:23.522 9074.215 - 9124.628: 82.7018% ( 163) 00:08:23.522 9124.628 - 9175.040: 83.6799% ( 159) 00:08:23.522 9175.040 - 9225.452: 84.5842% ( 147) 00:08:23.522 9225.452 - 9275.865: 85.4884% ( 147) 00:08:23.522 9275.865 - 9326.277: 86.3558% ( 141) 00:08:23.522 9326.277 - 9376.689: 87.2047% ( 138) 00:08:23.522 9376.689 - 9427.102: 88.0106% ( 131) 00:08:23.522 9427.102 - 9477.514: 88.6811% ( 109) 00:08:23.522 9477.514 - 9527.926: 89.3024% ( 101) 00:08:23.522 9527.926 - 9578.338: 89.8130% ( 83) 00:08:23.522 9578.338 - 9628.751: 90.3851% ( 93) 00:08:23.522 9628.751 - 9679.163: 90.8526% ( 76) 00:08:23.522 9679.163 - 9729.575: 91.2402% ( 63) 00:08:23.522 9729.575 - 9779.988: 91.5600% ( 52) 00:08:23.523 9779.988 - 9830.400: 91.8430% ( 46) 00:08:23.523 9830.400 - 9880.812: 92.1014% ( 42) 00:08:23.523 9880.812 - 9931.225: 92.3474% ( 40) 00:08:23.523 9931.225 - 9981.637: 92.5504% ( 33) 00:08:23.523 9981.637 - 10032.049: 92.6981% ( 24) 00:08:23.523 10032.049 - 10082.462: 92.8396% ( 23) 00:08:23.523 10082.462 - 10132.874: 92.9688% ( 21) 00:08:23.523 10132.874 - 10183.286: 93.1102% ( 23) 00:08:23.523 10183.286 - 10233.698: 93.2025% ( 15) 00:08:23.523 10233.698 - 10284.111: 93.3071% ( 17) 00:08:23.523 10284.111 - 10334.523: 93.4117% ( 17) 00:08:23.523 10334.523 - 10384.935: 93.5101% ( 16) 00:08:23.523 10384.935 - 10435.348: 93.5962% ( 14) 00:08:23.523 10435.348 - 10485.760: 93.7131% ( 19) 00:08:23.523 10485.760 - 10536.172: 93.8238% ( 18) 00:08:23.523 10536.172 - 10586.585: 93.9407% ( 19) 00:08:23.523 10586.585 - 10636.997: 94.0391% ( 16) 00:08:23.523 10636.997 - 10687.409: 94.1437% ( 17) 00:08:23.523 10687.409 - 10737.822: 94.2421% ( 16) 00:08:23.523 10737.822 - 10788.234: 94.3406% ( 16) 
00:08:23.523 10788.234 - 10838.646: 94.4205% ( 13) 00:08:23.523 10838.646 - 10889.058: 94.5066% ( 14) 00:08:23.523 10889.058 - 10939.471: 94.5866% ( 13) 00:08:23.523 10939.471 - 10989.883: 94.6727% ( 14) 00:08:23.523 10989.883 - 11040.295: 94.7650% ( 15) 00:08:23.523 11040.295 - 11090.708: 94.8388% ( 12) 00:08:23.523 11090.708 - 11141.120: 94.9126% ( 12) 00:08:23.523 11141.120 - 11191.532: 95.0111% ( 16) 00:08:23.523 11191.532 - 11241.945: 95.0910% ( 13) 00:08:23.523 11241.945 - 11292.357: 95.1772% ( 14) 00:08:23.523 11292.357 - 11342.769: 95.2571% ( 13) 00:08:23.523 11342.769 - 11393.182: 95.3310% ( 12) 00:08:23.523 11393.182 - 11443.594: 95.3986% ( 11) 00:08:23.523 11443.594 - 11494.006: 95.4540% ( 9) 00:08:23.523 11494.006 - 11544.418: 95.4786% ( 4) 00:08:23.523 11544.418 - 11594.831: 95.5032% ( 4) 00:08:23.523 11594.831 - 11645.243: 95.5278% ( 4) 00:08:23.523 11645.243 - 11695.655: 95.5647% ( 6) 00:08:23.523 11695.655 - 11746.068: 95.5893% ( 4) 00:08:23.523 11746.068 - 11796.480: 95.6447% ( 9) 00:08:23.523 11796.480 - 11846.892: 95.6754% ( 5) 00:08:23.523 11846.892 - 11897.305: 95.7431% ( 11) 00:08:23.523 11897.305 - 11947.717: 95.8169% ( 12) 00:08:23.523 11947.717 - 11998.129: 95.8784% ( 10) 00:08:23.523 11998.129 - 12048.542: 95.9461% ( 11) 00:08:23.523 12048.542 - 12098.954: 96.0199% ( 12) 00:08:23.523 12098.954 - 12149.366: 96.0876% ( 11) 00:08:23.523 12149.366 - 12199.778: 96.1307% ( 7) 00:08:23.523 12199.778 - 12250.191: 96.1799% ( 8) 00:08:23.523 12250.191 - 12300.603: 96.2229% ( 7) 00:08:23.523 12300.603 - 12351.015: 96.2968% ( 12) 00:08:23.523 12351.015 - 12401.428: 96.3952% ( 16) 00:08:23.523 12401.428 - 12451.840: 96.4936% ( 16) 00:08:23.523 12451.840 - 12502.252: 96.5736% ( 13) 00:08:23.523 12502.252 - 12552.665: 96.6658% ( 15) 00:08:23.523 12552.665 - 12603.077: 96.7581% ( 15) 00:08:23.523 12603.077 - 12653.489: 96.8442% ( 14) 00:08:23.523 12653.489 - 12703.902: 96.9365% ( 15) 00:08:23.523 12703.902 - 12754.314: 97.0226% ( 14) 00:08:23.523 12754.314 - 12804.726: 97.1149% ( 15) 00:08:23.523 12804.726 - 12855.138: 97.1887% ( 12) 00:08:23.523 12855.138 - 12905.551: 97.2318% ( 7) 00:08:23.523 12905.551 - 13006.375: 97.3856% ( 25) 00:08:23.523 13006.375 - 13107.200: 97.6132% ( 37) 00:08:23.523 13107.200 - 13208.025: 97.8346% ( 36) 00:08:23.523 13208.025 - 13308.849: 98.0623% ( 37) 00:08:23.523 13308.849 - 13409.674: 98.1976% ( 22) 00:08:23.523 13409.674 - 13510.498: 98.3637% ( 27) 00:08:23.523 13510.498 - 13611.323: 98.5236% ( 26) 00:08:23.523 13611.323 - 13712.148: 98.6897% ( 27) 00:08:23.523 13712.148 - 13812.972: 98.8558% ( 27) 00:08:23.523 13812.972 - 13913.797: 98.9911% ( 22) 00:08:23.523 13913.797 - 14014.622: 99.0588% ( 11) 00:08:23.523 14014.622 - 14115.446: 99.1019% ( 7) 00:08:23.523 14115.446 - 14216.271: 99.1388% ( 6) 00:08:23.523 14216.271 - 14317.095: 99.1757% ( 6) 00:08:23.523 14317.095 - 14417.920: 99.2126% ( 6) 00:08:23.523 16736.886 - 16837.711: 99.2311% ( 3) 00:08:23.523 16837.711 - 16938.535: 99.2618% ( 5) 00:08:23.523 16938.535 - 17039.360: 99.2803% ( 3) 00:08:23.523 17039.360 - 17140.185: 99.2926% ( 2) 00:08:23.523 17140.185 - 17241.009: 99.3172% ( 4) 00:08:23.523 17241.009 - 17341.834: 99.3418% ( 4) 00:08:23.523 17341.834 - 17442.658: 99.3602% ( 3) 00:08:23.523 17442.658 - 17543.483: 99.3848% ( 4) 00:08:23.523 17543.483 - 17644.308: 99.4033% ( 3) 00:08:23.523 17644.308 - 17745.132: 99.4218% ( 3) 00:08:23.523 17745.132 - 17845.957: 99.4341% ( 2) 00:08:23.523 17845.957 - 17946.782: 99.4525% ( 3) 00:08:23.523 17946.782 - 18047.606: 99.4710% ( 3) 00:08:23.523 
18047.606 - 18148.431: 99.4894% ( 3) 00:08:23.523 18148.431 - 18249.255: 99.5079% ( 3) 00:08:23.523 18249.255 - 18350.080: 99.5263% ( 3) 00:08:23.523 18350.080 - 18450.905: 99.5509% ( 4) 00:08:23.523 18450.905 - 18551.729: 99.5694% ( 3) 00:08:23.523 18551.729 - 18652.554: 99.5878% ( 3) 00:08:23.523 18652.554 - 18753.378: 99.6063% ( 3) 00:08:23.523 26214.400 - 26416.049: 99.6371% ( 5) 00:08:23.523 26416.049 - 26617.698: 99.6740% ( 6) 00:08:23.523 26617.698 - 26819.348: 99.7109% ( 6) 00:08:23.523 26819.348 - 27020.997: 99.7478% ( 6) 00:08:23.523 27020.997 - 27222.646: 99.8216% ( 12) 00:08:23.523 27222.646 - 27424.295: 99.8893% ( 11) 00:08:23.523 27424.295 - 27625.945: 99.9631% ( 12) 00:08:23.523 27625.945 - 27827.594: 100.0000% ( 6) 00:08:23.523 00:08:23.523 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:23.523 ============================================================================== 00:08:23.523 Range in us Cumulative IO count 00:08:23.523 3780.923 - 3806.129: 0.0062% ( 1) 00:08:23.523 3806.129 - 3831.335: 0.0185% ( 2) 00:08:23.523 3831.335 - 3856.542: 0.0431% ( 4) 00:08:23.523 3856.542 - 3881.748: 0.0492% ( 1) 00:08:23.523 3881.748 - 3906.954: 0.0615% ( 2) 00:08:23.523 3906.954 - 3932.160: 0.0677% ( 1) 00:08:23.523 3932.160 - 3957.366: 0.0800% ( 2) 00:08:23.523 3957.366 - 3982.572: 0.0923% ( 2) 00:08:23.523 3982.572 - 4007.778: 0.1046% ( 2) 00:08:23.523 4007.778 - 4032.985: 0.1169% ( 2) 00:08:23.523 4032.985 - 4058.191: 0.1353% ( 3) 00:08:23.523 4058.191 - 4083.397: 0.1538% ( 3) 00:08:23.523 4083.397 - 4108.603: 0.1661% ( 2) 00:08:23.523 4108.603 - 4133.809: 0.1845% ( 3) 00:08:23.523 4133.809 - 4159.015: 0.1969% ( 2) 00:08:23.523 4159.015 - 4184.222: 0.2092% ( 2) 00:08:23.523 4184.222 - 4209.428: 0.2215% ( 2) 00:08:23.523 4209.428 - 4234.634: 0.2399% ( 3) 00:08:23.523 4234.634 - 4259.840: 0.2461% ( 1) 00:08:23.523 4259.840 - 4285.046: 0.2645% ( 3) 00:08:23.523 4285.046 - 4310.252: 0.2768% ( 2) 00:08:23.523 4310.252 - 4335.458: 0.2891% ( 2) 00:08:23.523 4335.458 - 4360.665: 0.3014% ( 2) 00:08:23.523 4360.665 - 4385.871: 0.3137% ( 2) 00:08:23.523 4385.871 - 4411.077: 0.3260% ( 2) 00:08:23.523 4411.077 - 4436.283: 0.3383% ( 2) 00:08:23.523 4436.283 - 4461.489: 0.3506% ( 2) 00:08:23.523 4461.489 - 4486.695: 0.3691% ( 3) 00:08:23.523 4486.695 - 4511.902: 0.3814% ( 2) 00:08:23.523 4511.902 - 4537.108: 0.3937% ( 2) 00:08:23.523 5368.911 - 5394.117: 0.4368% ( 7) 00:08:23.523 5394.117 - 5419.323: 0.4429% ( 1) 00:08:23.523 5419.323 - 5444.529: 0.4491% ( 1) 00:08:23.523 5444.529 - 5469.735: 0.4614% ( 2) 00:08:23.523 5469.735 - 5494.942: 0.4737% ( 2) 00:08:23.523 5494.942 - 5520.148: 0.4921% ( 3) 00:08:23.523 5520.148 - 5545.354: 0.5044% ( 2) 00:08:23.523 5545.354 - 5570.560: 0.5167% ( 2) 00:08:23.523 5570.560 - 5595.766: 0.5290% ( 2) 00:08:23.523 5595.766 - 5620.972: 0.5413% ( 2) 00:08:23.523 5620.972 - 5646.178: 0.5598% ( 3) 00:08:23.523 5646.178 - 5671.385: 0.5721% ( 2) 00:08:23.523 5671.385 - 5696.591: 0.5844% ( 2) 00:08:23.523 5696.591 - 5721.797: 0.5967% ( 2) 00:08:23.523 5721.797 - 5747.003: 0.6090% ( 2) 00:08:23.523 5747.003 - 5772.209: 0.6275% ( 3) 00:08:23.523 5772.209 - 5797.415: 0.6398% ( 2) 00:08:23.523 5797.415 - 5822.622: 0.6521% ( 2) 00:08:23.523 5822.622 - 5847.828: 0.6644% ( 2) 00:08:23.523 5847.828 - 5873.034: 0.6767% ( 2) 00:08:23.523 5873.034 - 5898.240: 0.6951% ( 3) 00:08:23.523 5898.240 - 5923.446: 0.7074% ( 2) 00:08:23.523 5923.446 - 5948.652: 0.7197% ( 2) 00:08:23.523 5948.652 - 5973.858: 0.7320% ( 2) 00:08:23.523 5973.858 - 5999.065: 0.7505% ( 3) 
00:08:23.523 5999.065 - 6024.271: 0.7628% ( 2) 00:08:23.523 6024.271 - 6049.477: 0.7751% ( 2) 00:08:23.523 6049.477 - 6074.683: 0.7874% ( 2) 00:08:23.523 6074.683 - 6099.889: 0.8797% ( 15) 00:08:23.523 6099.889 - 6125.095: 1.0150% ( 22) 00:08:23.523 6125.095 - 6150.302: 1.2857% ( 44) 00:08:23.523 6150.302 - 6175.508: 1.8332% ( 89) 00:08:23.523 6175.508 - 6200.714: 2.4729% ( 104) 00:08:23.524 6200.714 - 6225.920: 3.1435% ( 109) 00:08:23.524 6225.920 - 6251.126: 3.6725% ( 86) 00:08:23.524 6251.126 - 6276.332: 4.2753% ( 98) 00:08:23.524 6276.332 - 6301.538: 4.8105% ( 87) 00:08:23.524 6301.538 - 6326.745: 5.6471% ( 136) 00:08:23.524 6326.745 - 6351.951: 6.6375% ( 161) 00:08:23.524 6351.951 - 6377.157: 7.7571% ( 182) 00:08:23.524 6377.157 - 6402.363: 8.7475% ( 161) 00:08:23.524 6402.363 - 6427.569: 9.8364% ( 177) 00:08:23.524 6427.569 - 6452.775: 11.0421% ( 196) 00:08:23.524 6452.775 - 6503.188: 13.7426% ( 439) 00:08:23.524 6503.188 - 6553.600: 16.3263% ( 420) 00:08:23.524 6553.600 - 6604.012: 19.1006% ( 451) 00:08:23.524 6604.012 - 6654.425: 22.0719% ( 483) 00:08:23.524 6654.425 - 6704.837: 25.1292% ( 497) 00:08:23.524 6704.837 - 6755.249: 28.0081% ( 468) 00:08:23.524 6755.249 - 6805.662: 31.0285% ( 491) 00:08:23.524 6805.662 - 6856.074: 34.0305% ( 488) 00:08:23.524 6856.074 - 6906.486: 37.0571% ( 492) 00:08:23.524 6906.486 - 6956.898: 40.0529% ( 487) 00:08:23.524 6956.898 - 7007.311: 43.1102% ( 497) 00:08:23.524 7007.311 - 7057.723: 45.8415% ( 444) 00:08:23.524 7057.723 - 7108.135: 48.2530% ( 392) 00:08:23.524 7108.135 - 7158.548: 50.4552% ( 358) 00:08:23.524 7158.548 - 7208.960: 52.5960% ( 348) 00:08:23.524 7208.960 - 7259.372: 54.3615% ( 287) 00:08:23.524 7259.372 - 7309.785: 55.7763% ( 230) 00:08:23.524 7309.785 - 7360.197: 56.9328% ( 188) 00:08:23.524 7360.197 - 7410.609: 57.8556% ( 150) 00:08:23.524 7410.609 - 7461.022: 58.7229% ( 141) 00:08:23.524 7461.022 - 7511.434: 59.4611% ( 120) 00:08:23.524 7511.434 - 7561.846: 60.0148% ( 90) 00:08:23.524 7561.846 - 7612.258: 60.5069% ( 80) 00:08:23.524 7612.258 - 7662.671: 61.0790% ( 93) 00:08:23.524 7662.671 - 7713.083: 61.7003% ( 101) 00:08:23.524 7713.083 - 7763.495: 62.2847% ( 95) 00:08:23.524 7763.495 - 7813.908: 62.8322% ( 89) 00:08:23.524 7813.908 - 7864.320: 63.3858% ( 90) 00:08:23.524 7864.320 - 7914.732: 63.9149% ( 86) 00:08:23.524 7914.732 - 7965.145: 64.3885% ( 77) 00:08:23.524 7965.145 - 8015.557: 64.9114% ( 85) 00:08:23.524 8015.557 - 8065.969: 65.3666% ( 74) 00:08:23.524 8065.969 - 8116.382: 65.8711% ( 82) 00:08:23.524 8116.382 - 8166.794: 66.3201% ( 73) 00:08:23.524 8166.794 - 8217.206: 66.8922% ( 93) 00:08:23.524 8217.206 - 8267.618: 67.3659% ( 77) 00:08:23.524 8267.618 - 8318.031: 68.0057% ( 104) 00:08:23.524 8318.031 - 8368.443: 68.7131% ( 115) 00:08:23.524 8368.443 - 8418.855: 69.4574% ( 121) 00:08:23.524 8418.855 - 8469.268: 70.1710% ( 116) 00:08:23.524 8469.268 - 8519.680: 70.9830% ( 132) 00:08:23.524 8519.680 - 8570.092: 71.8750% ( 145) 00:08:23.524 8570.092 - 8620.505: 72.8162% ( 153) 00:08:23.524 8620.505 - 8670.917: 73.7389% ( 150) 00:08:23.524 8670.917 - 8721.329: 74.7232% ( 160) 00:08:23.524 8721.329 - 8771.742: 75.7382% ( 165) 00:08:23.524 8771.742 - 8822.154: 76.7532% ( 165) 00:08:23.524 8822.154 - 8872.566: 77.8051% ( 171) 00:08:23.524 8872.566 - 8922.978: 78.8201% ( 165) 00:08:23.524 8922.978 - 8973.391: 79.9151% ( 178) 00:08:23.524 8973.391 - 9023.803: 80.9855% ( 174) 00:08:23.524 9023.803 - 9074.215: 82.1235% ( 185) 00:08:23.524 9074.215 - 9124.628: 83.1385% ( 165) 00:08:23.524 9124.628 - 9175.040: 84.1351% ( 
162) 00:08:23.524 9175.040 - 9225.452: 85.1255% ( 161) 00:08:23.524 9225.452 - 9275.865: 86.0236% ( 146) 00:08:23.524 9275.865 - 9326.277: 86.9156% ( 145) 00:08:23.524 9326.277 - 9376.689: 87.6784% ( 124) 00:08:23.524 9376.689 - 9427.102: 88.4104% ( 119) 00:08:23.524 9427.102 - 9477.514: 89.0194% ( 99) 00:08:23.524 9477.514 - 9527.926: 89.5423% ( 85) 00:08:23.524 9527.926 - 9578.338: 90.0283% ( 79) 00:08:23.524 9578.338 - 9628.751: 90.4589% ( 70) 00:08:23.524 9628.751 - 9679.163: 90.8526% ( 64) 00:08:23.524 9679.163 - 9729.575: 91.2032% ( 57) 00:08:23.524 9729.575 - 9779.988: 91.5231% ( 52) 00:08:23.524 9779.988 - 9830.400: 91.7815% ( 42) 00:08:23.524 9830.400 - 9880.812: 92.0276% ( 40) 00:08:23.524 9880.812 - 9931.225: 92.2552% ( 37) 00:08:23.524 9931.225 - 9981.637: 92.4028% ( 24) 00:08:23.524 9981.637 - 10032.049: 92.5750% ( 28) 00:08:23.524 10032.049 - 10082.462: 92.7411% ( 27) 00:08:23.524 10082.462 - 10132.874: 92.9441% ( 33) 00:08:23.524 10132.874 - 10183.286: 93.1164% ( 28) 00:08:23.524 10183.286 - 10233.698: 93.2763% ( 26) 00:08:23.524 10233.698 - 10284.111: 93.3994% ( 20) 00:08:23.524 10284.111 - 10334.523: 93.5470% ( 24) 00:08:23.524 10334.523 - 10384.935: 93.6946% ( 24) 00:08:23.524 10384.935 - 10435.348: 93.8238% ( 21) 00:08:23.524 10435.348 - 10485.760: 93.9469% ( 20) 00:08:23.524 10485.760 - 10536.172: 94.0699% ( 20) 00:08:23.524 10536.172 - 10586.585: 94.1991% ( 21) 00:08:23.524 10586.585 - 10636.997: 94.3159% ( 19) 00:08:23.524 10636.997 - 10687.409: 94.4267% ( 18) 00:08:23.524 10687.409 - 10737.822: 94.5620% ( 22) 00:08:23.524 10737.822 - 10788.234: 94.6604% ( 16) 00:08:23.524 10788.234 - 10838.646: 94.7650% ( 17) 00:08:23.524 10838.646 - 10889.058: 94.8573% ( 15) 00:08:23.524 10889.058 - 10939.471: 94.9496% ( 15) 00:08:23.524 10939.471 - 10989.883: 95.0172% ( 11) 00:08:23.524 10989.883 - 11040.295: 95.0849% ( 11) 00:08:23.524 11040.295 - 11090.708: 95.1403% ( 9) 00:08:23.524 11090.708 - 11141.120: 95.1956% ( 9) 00:08:23.524 11141.120 - 11191.532: 95.2387% ( 7) 00:08:23.524 11191.532 - 11241.945: 95.2817% ( 7) 00:08:23.524 11241.945 - 11292.357: 95.3187% ( 6) 00:08:23.524 11292.357 - 11342.769: 95.3494% ( 5) 00:08:23.524 11342.769 - 11393.182: 95.3802% ( 5) 00:08:23.524 11393.182 - 11443.594: 95.4478% ( 11) 00:08:23.524 11443.594 - 11494.006: 95.5032% ( 9) 00:08:23.524 11494.006 - 11544.418: 95.5586% ( 9) 00:08:23.524 11544.418 - 11594.831: 95.6262% ( 11) 00:08:23.524 11594.831 - 11645.243: 95.7124% ( 14) 00:08:23.524 11645.243 - 11695.655: 95.7985% ( 14) 00:08:23.524 11695.655 - 11746.068: 95.8600% ( 10) 00:08:23.524 11746.068 - 11796.480: 95.9523% ( 15) 00:08:23.524 11796.480 - 11846.892: 96.0384% ( 14) 00:08:23.524 11846.892 - 11897.305: 96.1061% ( 11) 00:08:23.524 11897.305 - 11947.717: 96.1737% ( 11) 00:08:23.524 11947.717 - 11998.129: 96.2352% ( 10) 00:08:23.524 11998.129 - 12048.542: 96.3091% ( 12) 00:08:23.524 12048.542 - 12098.954: 96.3767% ( 11) 00:08:23.524 12098.954 - 12149.366: 96.4505% ( 12) 00:08:23.524 12149.366 - 12199.778: 96.5121% ( 10) 00:08:23.524 12199.778 - 12250.191: 96.5859% ( 12) 00:08:23.524 12250.191 - 12300.603: 96.6597% ( 12) 00:08:23.524 12300.603 - 12351.015: 96.7212% ( 10) 00:08:23.524 12351.015 - 12401.428: 96.7827% ( 10) 00:08:23.524 12401.428 - 12451.840: 96.8135% ( 5) 00:08:23.524 12451.840 - 12502.252: 96.8504% ( 6) 00:08:23.524 12502.252 - 12552.665: 96.8812% ( 5) 00:08:23.524 12552.665 - 12603.077: 96.9242% ( 7) 00:08:23.524 12603.077 - 12653.489: 96.9611% ( 6) 00:08:23.524 12653.489 - 12703.902: 96.9980% ( 6) 00:08:23.524 12703.902 
- 12754.314: 97.0534% ( 9) 00:08:23.524 12754.314 - 12804.726: 97.1764% ( 20) 00:08:23.524 12804.726 - 12855.138: 97.2564% ( 13) 00:08:23.524 12855.138 - 12905.551: 97.3056% ( 8) 00:08:23.524 12905.551 - 13006.375: 97.3856% ( 13) 00:08:23.524 13006.375 - 13107.200: 97.5947% ( 34) 00:08:23.525 13107.200 - 13208.025: 97.8162% ( 36) 00:08:23.525 13208.025 - 13308.849: 98.0130% ( 32) 00:08:23.525 13308.849 - 13409.674: 98.2160% ( 33) 00:08:23.525 13409.674 - 13510.498: 98.4375% ( 36) 00:08:23.525 13510.498 - 13611.323: 98.6097% ( 28) 00:08:23.525 13611.323 - 13712.148: 98.7328% ( 20) 00:08:23.525 13712.148 - 13812.972: 98.8558% ( 20) 00:08:23.525 13812.972 - 13913.797: 98.9727% ( 19) 00:08:23.525 13913.797 - 14014.622: 99.0527% ( 13) 00:08:23.525 14014.622 - 14115.446: 99.0896% ( 6) 00:08:23.525 14115.446 - 14216.271: 99.1265% ( 6) 00:08:23.525 14216.271 - 14317.095: 99.1634% ( 6) 00:08:23.525 14317.095 - 14417.920: 99.2064% ( 7) 00:08:23.525 14417.920 - 14518.745: 99.2126% ( 1) 00:08:23.525 16837.711 - 16938.535: 99.2311% ( 3) 00:08:23.525 16938.535 - 17039.360: 99.2495% ( 3) 00:08:23.525 17039.360 - 17140.185: 99.2680% ( 3) 00:08:23.525 17140.185 - 17241.009: 99.2864% ( 3) 00:08:23.525 17241.009 - 17341.834: 99.3049% ( 3) 00:08:23.525 17341.834 - 17442.658: 99.3295% ( 4) 00:08:23.525 17442.658 - 17543.483: 99.3541% ( 4) 00:08:23.525 17543.483 - 17644.308: 99.3725% ( 3) 00:08:23.525 17644.308 - 17745.132: 99.3848% ( 2) 00:08:23.525 17745.132 - 17845.957: 99.4033% ( 3) 00:08:23.525 17845.957 - 17946.782: 99.4279% ( 4) 00:08:23.525 17946.782 - 18047.606: 99.4464% ( 3) 00:08:23.525 18047.606 - 18148.431: 99.4648% ( 3) 00:08:23.525 18148.431 - 18249.255: 99.4833% ( 3) 00:08:23.525 18249.255 - 18350.080: 99.5079% ( 4) 00:08:23.525 18350.080 - 18450.905: 99.5263% ( 3) 00:08:23.525 18450.905 - 18551.729: 99.5448% ( 3) 00:08:23.525 18551.729 - 18652.554: 99.5694% ( 4) 00:08:23.525 18652.554 - 18753.378: 99.5878% ( 3) 00:08:23.525 18753.378 - 18854.203: 99.6001% ( 2) 00:08:23.525 18854.203 - 18955.028: 99.6063% ( 1) 00:08:23.525 25811.102 - 26012.751: 99.6678% ( 10) 00:08:23.525 26012.751 - 26214.400: 99.7416% ( 12) 00:08:23.525 26214.400 - 26416.049: 99.8031% ( 10) 00:08:23.525 26416.049 - 26617.698: 99.8770% ( 12) 00:08:23.525 26617.698 - 26819.348: 99.9508% ( 12) 00:08:23.525 26819.348 - 27020.997: 100.0000% ( 8) 00:08:23.525 00:08:23.525 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:23.525 ============================================================================== 00:08:23.525 Range in us Cumulative IO count 00:08:23.525 3629.686 - 3654.892: 0.0062% ( 1) 00:08:23.525 3654.892 - 3680.098: 0.0185% ( 2) 00:08:23.525 3680.098 - 3705.305: 0.0615% ( 7) 00:08:23.525 3705.305 - 3730.511: 0.0861% ( 4) 00:08:23.525 3730.511 - 3755.717: 0.0923% ( 1) 00:08:23.525 3755.717 - 3780.923: 0.0984% ( 1) 00:08:23.525 3780.923 - 3806.129: 0.1169% ( 3) 00:08:23.525 3831.335 - 3856.542: 0.1230% ( 1) 00:08:23.525 3856.542 - 3881.748: 0.1353% ( 2) 00:08:23.525 3881.748 - 3906.954: 0.1476% ( 2) 00:08:23.525 3906.954 - 3932.160: 0.1661% ( 3) 00:08:23.525 3932.160 - 3957.366: 0.1845% ( 3) 00:08:23.525 3957.366 - 3982.572: 0.1969% ( 2) 00:08:23.525 3982.572 - 4007.778: 0.2092% ( 2) 00:08:23.525 4007.778 - 4032.985: 0.2215% ( 2) 00:08:23.525 4032.985 - 4058.191: 0.2399% ( 3) 00:08:23.525 4058.191 - 4083.397: 0.2522% ( 2) 00:08:23.525 4083.397 - 4108.603: 0.2584% ( 1) 00:08:23.525 4108.603 - 4133.809: 0.2707% ( 2) 00:08:23.525 4133.809 - 4159.015: 0.2830% ( 2) 00:08:23.525 4159.015 - 4184.222: 0.2953% ( 
2) 00:08:23.525 4184.222 - 4209.428: 0.3076% ( 2) 00:08:23.525 4209.428 - 4234.634: 0.3260% ( 3) 00:08:23.525 4234.634 - 4259.840: 0.3322% ( 1) 00:08:23.525 4259.840 - 4285.046: 0.3445% ( 2) 00:08:23.525 4285.046 - 4310.252: 0.3568% ( 2) 00:08:23.525 4310.252 - 4335.458: 0.3752% ( 3) 00:08:23.525 4335.458 - 4360.665: 0.3875% ( 2) 00:08:23.525 4360.665 - 4385.871: 0.3937% ( 1) 00:08:23.525 5116.849 - 5142.055: 0.4122% ( 3) 00:08:23.525 5142.055 - 5167.262: 0.4306% ( 3) 00:08:23.525 5167.262 - 5192.468: 0.4491% ( 3) 00:08:23.525 5192.468 - 5217.674: 0.4614% ( 2) 00:08:23.525 5217.674 - 5242.880: 0.4675% ( 1) 00:08:23.525 5242.880 - 5268.086: 0.4798% ( 2) 00:08:23.525 5268.086 - 5293.292: 0.4921% ( 2) 00:08:23.525 5293.292 - 5318.498: 0.5106% ( 3) 00:08:23.525 5318.498 - 5343.705: 0.5167% ( 1) 00:08:23.525 5343.705 - 5368.911: 0.5352% ( 3) 00:08:23.525 5368.911 - 5394.117: 0.5475% ( 2) 00:08:23.525 5394.117 - 5419.323: 0.5659% ( 3) 00:08:23.525 5419.323 - 5444.529: 0.5782% ( 2) 00:08:23.525 5444.529 - 5469.735: 0.5906% ( 2) 00:08:23.525 5469.735 - 5494.942: 0.6090% ( 3) 00:08:23.525 5494.942 - 5520.148: 0.6213% ( 2) 00:08:23.525 5520.148 - 5545.354: 0.6336% ( 2) 00:08:23.525 5545.354 - 5570.560: 0.6459% ( 2) 00:08:23.525 5570.560 - 5595.766: 0.6582% ( 2) 00:08:23.525 5595.766 - 5620.972: 0.6767% ( 3) 00:08:23.525 5620.972 - 5646.178: 0.6890% ( 2) 00:08:23.525 5646.178 - 5671.385: 0.7013% ( 2) 00:08:23.525 5671.385 - 5696.591: 0.7136% ( 2) 00:08:23.525 5696.591 - 5721.797: 0.7259% ( 2) 00:08:23.525 5721.797 - 5747.003: 0.7382% ( 2) 00:08:23.525 5747.003 - 5772.209: 0.7505% ( 2) 00:08:23.525 5772.209 - 5797.415: 0.7628% ( 2) 00:08:23.525 5797.415 - 5822.622: 0.7751% ( 2) 00:08:23.525 5822.622 - 5847.828: 0.7874% ( 2) 00:08:23.525 6049.477 - 6074.683: 0.8182% ( 5) 00:08:23.525 6074.683 - 6099.889: 0.9104% ( 15) 00:08:23.525 6099.889 - 6125.095: 1.1503% ( 39) 00:08:23.525 6125.095 - 6150.302: 1.4702% ( 52) 00:08:23.525 6150.302 - 6175.508: 1.9316% ( 75) 00:08:23.525 6175.508 - 6200.714: 2.4852% ( 90) 00:08:23.525 6200.714 - 6225.920: 3.0942% ( 99) 00:08:23.525 6225.920 - 6251.126: 3.7340% ( 104) 00:08:23.525 6251.126 - 6276.332: 4.4107% ( 110) 00:08:23.525 6276.332 - 6301.538: 5.0874% ( 110) 00:08:23.525 6301.538 - 6326.745: 5.7702% ( 111) 00:08:23.525 6326.745 - 6351.951: 6.5637% ( 129) 00:08:23.525 6351.951 - 6377.157: 7.7387% ( 191) 00:08:23.525 6377.157 - 6402.363: 8.7783% ( 169) 00:08:23.525 6402.363 - 6427.569: 9.9594% ( 192) 00:08:23.525 6427.569 - 6452.775: 11.0667% ( 180) 00:08:23.525 6452.775 - 6503.188: 13.7303% ( 433) 00:08:23.525 6503.188 - 6553.600: 16.4309% ( 439) 00:08:23.525 6553.600 - 6604.012: 19.3467% ( 474) 00:08:23.525 6604.012 - 6654.425: 22.3118% ( 482) 00:08:23.525 6654.425 - 6704.837: 25.2215% ( 473) 00:08:23.525 6704.837 - 6755.249: 28.1065% ( 469) 00:08:23.525 6755.249 - 6805.662: 31.0285% ( 475) 00:08:23.525 6805.662 - 6856.074: 33.9936% ( 482) 00:08:23.525 6856.074 - 6906.486: 37.0263% ( 493) 00:08:23.525 6906.486 - 6956.898: 40.0529% ( 492) 00:08:23.525 6956.898 - 7007.311: 43.0364% ( 485) 00:08:23.525 7007.311 - 7057.723: 45.7677% ( 444) 00:08:23.525 7057.723 - 7108.135: 48.0623% ( 373) 00:08:23.525 7108.135 - 7158.548: 50.2030% ( 348) 00:08:23.525 7158.548 - 7208.960: 52.2392% ( 331) 00:08:23.525 7208.960 - 7259.372: 54.0662% ( 297) 00:08:23.525 7259.372 - 7309.785: 55.4257% ( 221) 00:08:23.525 7309.785 - 7360.197: 56.5699% ( 186) 00:08:23.525 7360.197 - 7410.609: 57.4803% ( 148) 00:08:23.525 7410.609 - 7461.022: 58.3108% ( 135) 00:08:23.525 7461.022 - 7511.434: 
59.0428% ( 119) 00:08:23.525 7511.434 - 7561.846: 59.6641% ( 101) 00:08:23.525 7561.846 - 7612.258: 60.2793% ( 100) 00:08:23.525 7612.258 - 7662.671: 60.7899% ( 83) 00:08:23.525 7662.671 - 7713.083: 61.3620% ( 93) 00:08:23.525 7713.083 - 7763.495: 61.8971% ( 87) 00:08:23.525 7763.495 - 7813.908: 62.4692% ( 93) 00:08:23.525 7813.908 - 7864.320: 62.9798% ( 83) 00:08:23.525 7864.320 - 7914.732: 63.5027% ( 85) 00:08:23.525 7914.732 - 7965.145: 64.0194% ( 84) 00:08:23.525 7965.145 - 8015.557: 64.5177% ( 81) 00:08:23.525 8015.557 - 8065.969: 64.9914% ( 77) 00:08:23.525 8065.969 - 8116.382: 65.5327% ( 88) 00:08:23.525 8116.382 - 8166.794: 66.0495% ( 84) 00:08:23.525 8166.794 - 8217.206: 66.6462% ( 97) 00:08:23.525 8217.206 - 8267.618: 67.2121% ( 92) 00:08:23.525 8267.618 - 8318.031: 67.8088% ( 97) 00:08:23.525 8318.031 - 8368.443: 68.4916% ( 111) 00:08:23.525 8368.443 - 8418.855: 69.2421% ( 122) 00:08:23.525 8418.855 - 8469.268: 70.0357% ( 129) 00:08:23.525 8469.268 - 8519.680: 70.8600% ( 134) 00:08:23.525 8519.680 - 8570.092: 71.7766% ( 149) 00:08:23.525 8570.092 - 8620.505: 72.7731% ( 162) 00:08:23.525 8620.505 - 8670.917: 73.8066% ( 168) 00:08:23.525 8670.917 - 8721.329: 74.8401% ( 168) 00:08:23.525 8721.329 - 8771.742: 75.8735% ( 168) 00:08:23.525 8771.742 - 8822.154: 76.9624% ( 177) 00:08:23.525 8822.154 - 8872.566: 78.0881% ( 183) 00:08:23.525 8872.566 - 8922.978: 79.2323% ( 186) 00:08:23.525 8922.978 - 8973.391: 80.3396% ( 180) 00:08:23.525 8973.391 - 9023.803: 81.4530% ( 181) 00:08:23.525 9023.803 - 9074.215: 82.5541% ( 179) 00:08:23.525 9074.215 - 9124.628: 83.5507% ( 162) 00:08:23.525 9124.628 - 9175.040: 84.5965% ( 170) 00:08:23.525 9175.040 - 9225.452: 85.5623% ( 157) 00:08:23.525 9225.452 - 9275.865: 86.4112% ( 138) 00:08:23.526 9275.865 - 9326.277: 87.2355% ( 134) 00:08:23.526 9326.277 - 9376.689: 88.0106% ( 126) 00:08:23.526 9376.689 - 9427.102: 88.6996% ( 112) 00:08:23.526 9427.102 - 9477.514: 89.2963% ( 97) 00:08:23.526 9477.514 - 9527.926: 89.8130% ( 84) 00:08:23.526 9527.926 - 9578.338: 90.2682% ( 74) 00:08:23.526 9578.338 - 9628.751: 90.6558% ( 63) 00:08:23.526 9628.751 - 9679.163: 91.0064% ( 57) 00:08:23.526 9679.163 - 9729.575: 91.3201% ( 51) 00:08:23.526 9729.575 - 9779.988: 91.6339% ( 51) 00:08:23.526 9779.988 - 9830.400: 91.9045% ( 44) 00:08:23.526 9830.400 - 9880.812: 92.1629% ( 42) 00:08:23.526 9880.812 - 9931.225: 92.3536% ( 31) 00:08:23.526 9931.225 - 9981.637: 92.5258% ( 28) 00:08:23.526 9981.637 - 10032.049: 92.7288% ( 33) 00:08:23.526 10032.049 - 10082.462: 92.9072% ( 29) 00:08:23.526 10082.462 - 10132.874: 93.0364% ( 21) 00:08:23.526 10132.874 - 10183.286: 93.1718% ( 22) 00:08:23.526 10183.286 - 10233.698: 93.2948% ( 20) 00:08:23.526 10233.698 - 10284.111: 93.4301% ( 22) 00:08:23.526 10284.111 - 10334.523: 93.5470% ( 19) 00:08:23.526 10334.523 - 10384.935: 93.7069% ( 26) 00:08:23.526 10384.935 - 10435.348: 93.8423% ( 22) 00:08:23.526 10435.348 - 10485.760: 93.9530% ( 18) 00:08:23.526 10485.760 - 10536.172: 94.0699% ( 19) 00:08:23.526 10536.172 - 10586.585: 94.1745% ( 17) 00:08:23.526 10586.585 - 10636.997: 94.3098% ( 22) 00:08:23.526 10636.997 - 10687.409: 94.4636% ( 25) 00:08:23.526 10687.409 - 10737.822: 94.5866% ( 20) 00:08:23.526 10737.822 - 10788.234: 94.6604% ( 12) 00:08:23.526 10788.234 - 10838.646: 94.7466% ( 14) 00:08:23.526 10838.646 - 10889.058: 94.8142% ( 11) 00:08:23.526 10889.058 - 10939.471: 94.8696% ( 9) 00:08:23.526 10939.471 - 10989.883: 94.9434% ( 12) 00:08:23.526 10989.883 - 11040.295: 95.0234% ( 13) 00:08:23.526 11040.295 - 11090.708: 95.1033% 
( 13) 00:08:23.526 11090.708 - 11141.120: 95.1710% ( 11) 00:08:23.526 11141.120 - 11191.532: 95.2448% ( 12) 00:08:23.526 11191.532 - 11241.945: 95.3125% ( 11) 00:08:23.526 11241.945 - 11292.357: 95.3679% ( 9) 00:08:23.526 11292.357 - 11342.769: 95.4355% ( 11) 00:08:23.526 11342.769 - 11393.182: 95.5155% ( 13) 00:08:23.526 11393.182 - 11443.594: 95.5955% ( 13) 00:08:23.526 11443.594 - 11494.006: 95.6754% ( 13) 00:08:23.526 11494.006 - 11544.418: 95.7308% ( 9) 00:08:23.526 11544.418 - 11594.831: 95.7800% ( 8) 00:08:23.526 11594.831 - 11645.243: 95.8354% ( 9) 00:08:23.526 11645.243 - 11695.655: 95.9092% ( 12) 00:08:23.526 11695.655 - 11746.068: 95.9830% ( 12) 00:08:23.526 11746.068 - 11796.480: 96.0630% ( 13) 00:08:23.526 11796.480 - 11846.892: 96.1307% ( 11) 00:08:23.526 11846.892 - 11897.305: 96.2045% ( 12) 00:08:23.526 11897.305 - 11947.717: 96.2783% ( 12) 00:08:23.526 11947.717 - 11998.129: 96.3583% ( 13) 00:08:23.526 11998.129 - 12048.542: 96.4444% ( 14) 00:08:23.526 12048.542 - 12098.954: 96.5059% ( 10) 00:08:23.526 12098.954 - 12149.366: 96.5859% ( 13) 00:08:23.526 12149.366 - 12199.778: 96.6597% ( 12) 00:08:23.526 12199.778 - 12250.191: 96.7458% ( 14) 00:08:23.526 12250.191 - 12300.603: 96.8319% ( 14) 00:08:23.526 12300.603 - 12351.015: 96.8935% ( 10) 00:08:23.526 12351.015 - 12401.428: 96.9611% ( 11) 00:08:23.526 12401.428 - 12451.840: 97.0288% ( 11) 00:08:23.526 12451.840 - 12502.252: 97.0842% ( 9) 00:08:23.526 12502.252 - 12552.665: 97.1334% ( 8) 00:08:23.526 12552.665 - 12603.077: 97.1887% ( 9) 00:08:23.526 12603.077 - 12653.489: 97.2318% ( 7) 00:08:23.526 12653.489 - 12703.902: 97.2872% ( 9) 00:08:23.526 12703.902 - 12754.314: 97.3671% ( 13) 00:08:23.526 12754.314 - 12804.726: 97.4409% ( 12) 00:08:23.526 12804.726 - 12855.138: 97.5025% ( 10) 00:08:23.526 12855.138 - 12905.551: 97.5763% ( 12) 00:08:23.526 12905.551 - 13006.375: 97.7116% ( 22) 00:08:23.526 13006.375 - 13107.200: 97.8285% ( 19) 00:08:23.526 13107.200 - 13208.025: 97.9700% ( 23) 00:08:23.526 13208.025 - 13308.849: 98.1115% ( 23) 00:08:23.526 13308.849 - 13409.674: 98.2776% ( 27) 00:08:23.526 13409.674 - 13510.498: 98.4437% ( 27) 00:08:23.526 13510.498 - 13611.323: 98.6344% ( 31) 00:08:23.526 13611.323 - 13712.148: 98.7758% ( 23) 00:08:23.526 13712.148 - 13812.972: 98.8804% ( 17) 00:08:23.526 13812.972 - 13913.797: 98.9727% ( 15) 00:08:23.526 13913.797 - 14014.622: 99.0465% ( 12) 00:08:23.526 14014.622 - 14115.446: 99.1019% ( 9) 00:08:23.526 14115.446 - 14216.271: 99.1388% ( 6) 00:08:23.526 14216.271 - 14317.095: 99.1757% ( 6) 00:08:23.526 14317.095 - 14417.920: 99.2126% ( 6) 00:08:23.526 17039.360 - 17140.185: 99.2372% ( 4) 00:08:23.526 17140.185 - 17241.009: 99.2495% ( 2) 00:08:23.526 17241.009 - 17341.834: 99.2741% ( 4) 00:08:23.526 17341.834 - 17442.658: 99.2987% ( 4) 00:08:23.526 17442.658 - 17543.483: 99.3110% ( 2) 00:08:23.526 17543.483 - 17644.308: 99.3356% ( 4) 00:08:23.526 17644.308 - 17745.132: 99.3602% ( 4) 00:08:23.526 17745.132 - 17845.957: 99.3725% ( 2) 00:08:23.526 17845.957 - 17946.782: 99.3971% ( 4) 00:08:23.526 17946.782 - 18047.606: 99.4156% ( 3) 00:08:23.526 18047.606 - 18148.431: 99.4341% ( 3) 00:08:23.526 18148.431 - 18249.255: 99.4525% ( 3) 00:08:23.526 18249.255 - 18350.080: 99.4833% ( 5) 00:08:23.526 18350.080 - 18450.905: 99.5079% ( 4) 00:08:23.526 18450.905 - 18551.729: 99.5263% ( 3) 00:08:23.526 18551.729 - 18652.554: 99.5448% ( 3) 00:08:23.526 18652.554 - 18753.378: 99.5632% ( 3) 00:08:23.526 18753.378 - 18854.203: 99.5817% ( 3) 00:08:23.526 18854.203 - 18955.028: 99.5940% ( 2) 
00:08:23.526 18955.028 - 19055.852: 99.6063% ( 2) 00:08:23.526 25004.505 - 25105.329: 99.6309% ( 4) 00:08:23.526 25105.329 - 25206.154: 99.6678% ( 6) 00:08:23.526 25206.154 - 25306.978: 99.6986% ( 5) 00:08:23.526 25306.978 - 25407.803: 99.7355% ( 6) 00:08:23.526 25407.803 - 25508.628: 99.7662% ( 5) 00:08:23.526 25508.628 - 25609.452: 99.8031% ( 6) 00:08:23.526 25609.452 - 25710.277: 99.8339% ( 5) 00:08:23.526 25710.277 - 25811.102: 99.8708% ( 6) 00:08:23.526 25811.102 - 26012.751: 99.9446% ( 12) 00:08:23.526 26012.751 - 26214.400: 100.0000% ( 9) 00:08:23.526 00:08:23.526 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:23.526 ============================================================================== 00:08:23.526 Range in us Cumulative IO count 00:08:23.526 3478.449 - 3503.655: 0.0062% ( 1) 00:08:23.526 3503.655 - 3528.862: 0.0615% ( 9) 00:08:23.526 3528.862 - 3554.068: 0.0861% ( 4) 00:08:23.526 3554.068 - 3579.274: 0.0923% ( 1) 00:08:23.526 3604.480 - 3629.686: 0.0984% ( 1) 00:08:23.526 3629.686 - 3654.892: 0.1107% ( 2) 00:08:23.526 3654.892 - 3680.098: 0.1230% ( 2) 00:08:23.526 3680.098 - 3705.305: 0.1353% ( 2) 00:08:23.526 3705.305 - 3730.511: 0.1476% ( 2) 00:08:23.526 3730.511 - 3755.717: 0.1599% ( 2) 00:08:23.526 3755.717 - 3780.923: 0.1722% ( 2) 00:08:23.526 3780.923 - 3806.129: 0.1845% ( 2) 00:08:23.526 3806.129 - 3831.335: 0.1969% ( 2) 00:08:23.526 3831.335 - 3856.542: 0.2092% ( 2) 00:08:23.526 3856.542 - 3881.748: 0.2276% ( 3) 00:08:23.526 3881.748 - 3906.954: 0.2399% ( 2) 00:08:23.526 3906.954 - 3932.160: 0.2522% ( 2) 00:08:23.526 3932.160 - 3957.366: 0.2645% ( 2) 00:08:23.526 3957.366 - 3982.572: 0.2768% ( 2) 00:08:23.526 3982.572 - 4007.778: 0.2891% ( 2) 00:08:23.526 4007.778 - 4032.985: 0.3076% ( 3) 00:08:23.526 4032.985 - 4058.191: 0.3199% ( 2) 00:08:23.526 4058.191 - 4083.397: 0.3322% ( 2) 00:08:23.526 4083.397 - 4108.603: 0.3445% ( 2) 00:08:23.526 4108.603 - 4133.809: 0.3568% ( 2) 00:08:23.526 4133.809 - 4159.015: 0.3691% ( 2) 00:08:23.526 4159.015 - 4184.222: 0.3875% ( 3) 00:08:23.526 4184.222 - 4209.428: 0.3937% ( 1) 00:08:23.526 4940.406 - 4965.612: 0.3999% ( 1) 00:08:23.526 4965.612 - 4990.818: 0.4552% ( 9) 00:08:23.526 4990.818 - 5016.025: 0.4737% ( 3) 00:08:23.526 5016.025 - 5041.231: 0.4798% ( 1) 00:08:23.526 5066.437 - 5091.643: 0.4921% ( 2) 00:08:23.526 5091.643 - 5116.849: 0.5044% ( 2) 00:08:23.526 5116.849 - 5142.055: 0.5167% ( 2) 00:08:23.526 5142.055 - 5167.262: 0.5290% ( 2) 00:08:23.526 5167.262 - 5192.468: 0.5475% ( 3) 00:08:23.526 5192.468 - 5217.674: 0.5598% ( 2) 00:08:23.526 5217.674 - 5242.880: 0.5721% ( 2) 00:08:23.526 5242.880 - 5268.086: 0.5844% ( 2) 00:08:23.526 5268.086 - 5293.292: 0.5967% ( 2) 00:08:23.526 5293.292 - 5318.498: 0.6090% ( 2) 00:08:23.526 5318.498 - 5343.705: 0.6275% ( 3) 00:08:23.526 5343.705 - 5368.911: 0.6398% ( 2) 00:08:23.526 5368.911 - 5394.117: 0.6521% ( 2) 00:08:23.526 5394.117 - 5419.323: 0.6644% ( 2) 00:08:23.526 5419.323 - 5444.529: 0.6767% ( 2) 00:08:23.526 5444.529 - 5469.735: 0.6890% ( 2) 00:08:23.527 5469.735 - 5494.942: 0.7013% ( 2) 00:08:23.527 5494.942 - 5520.148: 0.7197% ( 3) 00:08:23.527 5520.148 - 5545.354: 0.7259% ( 1) 00:08:23.527 5545.354 - 5570.560: 0.7382% ( 2) 00:08:23.527 5570.560 - 5595.766: 0.7566% ( 3) 00:08:23.527 5595.766 - 5620.972: 0.7689% ( 2) 00:08:23.527 5620.972 - 5646.178: 0.7812% ( 2) 00:08:23.527 5646.178 - 5671.385: 0.7874% ( 1) 00:08:23.527 6049.477 - 6074.683: 0.8120% ( 4) 00:08:23.527 6074.683 - 6099.889: 0.8797% ( 11) 00:08:23.527 6099.889 - 6125.095: 1.0150% ( 22) 
00:08:23.527 6125.095 - 6150.302: 1.2980% ( 46) 00:08:23.527 6150.302 - 6175.508: 1.7470% ( 73) 00:08:23.527 6175.508 - 6200.714: 2.5160% ( 125) 00:08:23.527 6200.714 - 6225.920: 3.1065% ( 96) 00:08:23.527 6225.920 - 6251.126: 3.7463% ( 104) 00:08:23.527 6251.126 - 6276.332: 4.4722% ( 118) 00:08:23.527 6276.332 - 6301.538: 5.1304% ( 107) 00:08:23.527 6301.538 - 6326.745: 5.9363% ( 131) 00:08:23.527 6326.745 - 6351.951: 6.8529% ( 149) 00:08:23.527 6351.951 - 6377.157: 7.8187% ( 157) 00:08:23.527 6377.157 - 6402.363: 9.0182% ( 195) 00:08:23.527 6402.363 - 6427.569: 10.1993% ( 192) 00:08:23.527 6427.569 - 6452.775: 11.3066% ( 180) 00:08:23.527 6452.775 - 6503.188: 13.6873% ( 387) 00:08:23.527 6503.188 - 6553.600: 16.6093% ( 475) 00:08:23.527 6553.600 - 6604.012: 19.6174% ( 489) 00:08:23.527 6604.012 - 6654.425: 22.4717% ( 464) 00:08:23.527 6654.425 - 6704.837: 25.2953% ( 459) 00:08:23.527 6704.837 - 6755.249: 28.2665% ( 483) 00:08:23.527 6755.249 - 6805.662: 31.2192% ( 480) 00:08:23.527 6805.662 - 6856.074: 34.2212% ( 488) 00:08:23.527 6856.074 - 6906.486: 37.1801% ( 481) 00:08:23.527 6906.486 - 6956.898: 40.2128% ( 493) 00:08:23.527 6956.898 - 7007.311: 43.1656% ( 480) 00:08:23.527 7007.311 - 7057.723: 45.8415% ( 435) 00:08:23.527 7057.723 - 7108.135: 48.1730% ( 379) 00:08:23.527 7108.135 - 7158.548: 50.3076% ( 347) 00:08:23.527 7158.548 - 7208.960: 52.3561% ( 333) 00:08:23.527 7208.960 - 7259.372: 54.0846% ( 281) 00:08:23.527 7259.372 - 7309.785: 55.4441% ( 221) 00:08:23.527 7309.785 - 7360.197: 56.5207% ( 175) 00:08:23.527 7360.197 - 7410.609: 57.4003% ( 143) 00:08:23.527 7410.609 - 7461.022: 58.1693% ( 125) 00:08:23.527 7461.022 - 7511.434: 58.7660% ( 97) 00:08:23.527 7511.434 - 7561.846: 59.3750% ( 99) 00:08:23.527 7561.846 - 7612.258: 59.8917% ( 84) 00:08:23.527 7612.258 - 7662.671: 60.4023% ( 83) 00:08:23.527 7662.671 - 7713.083: 60.9067% ( 82) 00:08:23.527 7713.083 - 7763.495: 61.3804% ( 77) 00:08:23.527 7763.495 - 7813.908: 61.9279% ( 89) 00:08:23.527 7813.908 - 7864.320: 62.4077% ( 78) 00:08:23.527 7864.320 - 7914.732: 62.9183% ( 83) 00:08:23.527 7914.732 - 7965.145: 63.4719% ( 90) 00:08:23.527 7965.145 - 8015.557: 63.9702% ( 81) 00:08:23.527 8015.557 - 8065.969: 64.4993% ( 86) 00:08:23.527 8065.969 - 8116.382: 65.1206% ( 101) 00:08:23.527 8116.382 - 8166.794: 65.8034% ( 111) 00:08:23.527 8166.794 - 8217.206: 66.5600% ( 123) 00:08:23.527 8217.206 - 8267.618: 67.2982% ( 120) 00:08:23.527 8267.618 - 8318.031: 68.1041% ( 131) 00:08:23.527 8318.031 - 8368.443: 68.8177% ( 116) 00:08:23.527 8368.443 - 8418.855: 69.5928% ( 126) 00:08:23.527 8418.855 - 8469.268: 70.4232% ( 135) 00:08:23.527 8469.268 - 8519.680: 71.2660% ( 137) 00:08:23.527 8519.680 - 8570.092: 72.2502% ( 160) 00:08:23.527 8570.092 - 8620.505: 73.2406% ( 161) 00:08:23.527 8620.505 - 8670.917: 74.2741% ( 168) 00:08:23.527 8670.917 - 8721.329: 75.3014% ( 167) 00:08:23.527 8721.329 - 8771.742: 76.4395% ( 185) 00:08:23.527 8771.742 - 8822.154: 77.5221% ( 176) 00:08:23.527 8822.154 - 8872.566: 78.6848% ( 189) 00:08:23.527 8872.566 - 8922.978: 79.8290% ( 186) 00:08:23.527 8922.978 - 8973.391: 80.9978% ( 190) 00:08:23.527 8973.391 - 9023.803: 82.0866% ( 177) 00:08:23.527 9023.803 - 9074.215: 83.1816% ( 178) 00:08:23.527 9074.215 - 9124.628: 84.1166% ( 152) 00:08:23.527 9124.628 - 9175.040: 85.0455% ( 151) 00:08:23.527 9175.040 - 9225.452: 85.8821% ( 136) 00:08:23.527 9225.452 - 9275.865: 86.6941% ( 132) 00:08:23.527 9275.865 - 9326.277: 87.5123% ( 133) 00:08:23.527 9326.277 - 9376.689: 88.2997% ( 128) 00:08:23.527 9376.689 - 
9427.102: 88.9518% ( 106) 00:08:23.527 9427.102 - 9477.514: 89.5362% ( 95) 00:08:23.527 9477.514 - 9527.926: 90.0221% ( 79) 00:08:23.527 9527.926 - 9578.338: 90.4958% ( 77) 00:08:23.527 9578.338 - 9628.751: 90.9018% ( 66) 00:08:23.527 9628.751 - 9679.163: 91.2832% ( 62) 00:08:23.527 9679.163 - 9729.575: 91.5908% ( 50) 00:08:23.527 9729.575 - 9779.988: 91.8799% ( 47) 00:08:23.527 9779.988 - 9830.400: 92.1075% ( 37) 00:08:23.527 9830.400 - 9880.812: 92.3290% ( 36) 00:08:23.527 9880.812 - 9931.225: 92.4889% ( 26) 00:08:23.527 9931.225 - 9981.637: 92.6243% ( 22) 00:08:23.527 9981.637 - 10032.049: 92.7534% ( 21) 00:08:23.527 10032.049 - 10082.462: 92.8580% ( 17) 00:08:23.527 10082.462 - 10132.874: 92.9749% ( 19) 00:08:23.527 10132.874 - 10183.286: 93.1164% ( 23) 00:08:23.527 10183.286 - 10233.698: 93.2148% ( 16) 00:08:23.527 10233.698 - 10284.111: 93.3071% ( 15) 00:08:23.527 10284.111 - 10334.523: 93.3932% ( 14) 00:08:23.527 10334.523 - 10384.935: 93.4978% ( 17) 00:08:23.527 10384.935 - 10435.348: 93.6454% ( 24) 00:08:23.527 10435.348 - 10485.760: 93.7685% ( 20) 00:08:23.527 10485.760 - 10536.172: 93.8915% ( 20) 00:08:23.527 10536.172 - 10586.585: 93.9899% ( 16) 00:08:23.527 10586.585 - 10636.997: 94.1006% ( 18) 00:08:23.527 10636.997 - 10687.409: 94.2175% ( 19) 00:08:23.527 10687.409 - 10737.822: 94.3221% ( 17) 00:08:23.527 10737.822 - 10788.234: 94.4205% ( 16) 00:08:23.527 10788.234 - 10838.646: 94.5128% ( 15) 00:08:23.527 10838.646 - 10889.058: 94.5989% ( 14) 00:08:23.527 10889.058 - 10939.471: 94.6850% ( 14) 00:08:23.527 10939.471 - 10989.883: 94.7466% ( 10) 00:08:23.527 10989.883 - 11040.295: 94.8081% ( 10) 00:08:23.527 11040.295 - 11090.708: 94.8696% ( 10) 00:08:23.527 11090.708 - 11141.120: 94.9311% ( 10) 00:08:23.527 11141.120 - 11191.532: 95.0295% ( 16) 00:08:23.527 11191.532 - 11241.945: 95.1403% ( 18) 00:08:23.527 11241.945 - 11292.357: 95.2141% ( 12) 00:08:23.527 11292.357 - 11342.769: 95.2879% ( 12) 00:08:23.527 11342.769 - 11393.182: 95.3433% ( 9) 00:08:23.527 11393.182 - 11443.594: 95.4048% ( 10) 00:08:23.527 11443.594 - 11494.006: 95.4847% ( 13) 00:08:23.527 11494.006 - 11544.418: 95.5709% ( 14) 00:08:23.527 11544.418 - 11594.831: 95.6816% ( 18) 00:08:23.527 11594.831 - 11645.243: 95.7862% ( 17) 00:08:23.527 11645.243 - 11695.655: 95.8723% ( 14) 00:08:23.527 11695.655 - 11746.068: 95.9584% ( 14) 00:08:23.527 11746.068 - 11796.480: 96.0753% ( 19) 00:08:23.527 11796.480 - 11846.892: 96.1676% ( 15) 00:08:23.527 11846.892 - 11897.305: 96.2783% ( 18) 00:08:23.527 11897.305 - 11947.717: 96.3829% ( 17) 00:08:23.527 11947.717 - 11998.129: 96.4813% ( 16) 00:08:23.527 11998.129 - 12048.542: 96.5920% ( 18) 00:08:23.527 12048.542 - 12098.954: 96.6658% ( 12) 00:08:23.527 12098.954 - 12149.366: 96.7520% ( 14) 00:08:23.527 12149.366 - 12199.778: 96.8442% ( 15) 00:08:23.527 12199.778 - 12250.191: 96.9119% ( 11) 00:08:23.527 12250.191 - 12300.603: 96.9734% ( 10) 00:08:23.527 12300.603 - 12351.015: 97.0288% ( 9) 00:08:23.527 12351.015 - 12401.428: 97.1026% ( 12) 00:08:23.527 12401.428 - 12451.840: 97.1641% ( 10) 00:08:23.527 12451.840 - 12502.252: 97.2256% ( 10) 00:08:23.527 12502.252 - 12552.665: 97.2872% ( 10) 00:08:23.527 12552.665 - 12603.077: 97.3487% ( 10) 00:08:23.527 12603.077 - 12653.489: 97.4102% ( 10) 00:08:23.527 12653.489 - 12703.902: 97.4779% ( 11) 00:08:23.527 12703.902 - 12754.314: 97.5455% ( 11) 00:08:23.527 12754.314 - 12804.726: 97.6070% ( 10) 00:08:23.527 12804.726 - 12855.138: 97.6624% ( 9) 00:08:23.527 12855.138 - 12905.551: 97.7301% ( 11) 00:08:23.527 12905.551 - 13006.375: 
97.8531% ( 20)
00:08:23.527 13006.375 - 13107.200: 97.9700% ( 19)
00:08:23.527 13107.200 - 13208.025: 98.1238% ( 25)
00:08:23.527 13208.025 - 13308.849: 98.2222% ( 16)
00:08:23.527 13308.849 - 13409.674: 98.3637% ( 23)
00:08:23.527 13409.674 - 13510.498: 98.5113% ( 24)
00:08:23.527 13510.498 - 13611.323: 98.6590% ( 24)
00:08:23.527 13611.323 - 13712.148: 98.7881% ( 21)
00:08:23.527 13712.148 - 13812.972: 98.8681% ( 13)
00:08:23.527 13812.972 - 13913.797: 98.9481% ( 13)
00:08:23.527 13913.797 - 14014.622: 99.0219% ( 12)
00:08:23.527 14014.622 - 14115.446: 99.1019% ( 13)
00:08:23.527 14115.446 - 14216.271: 99.1388% ( 6)
00:08:23.527 14216.271 - 14317.095: 99.1818% ( 7)
00:08:23.527 14317.095 - 14417.920: 99.2126% ( 5)
00:08:23.527 17140.185 - 17241.009: 99.2372% ( 4)
00:08:23.527 17241.009 - 17341.834: 99.2557% ( 3)
00:08:23.527 17341.834 - 17442.658: 99.2741% ( 3)
00:08:23.527 17442.658 - 17543.483: 99.2987% ( 4)
00:08:23.527 17543.483 - 17644.308: 99.3172% ( 3)
00:08:23.527 17644.308 - 17745.132: 99.3356% ( 3)
00:08:23.527 17745.132 - 17845.957: 99.3602% ( 4)
00:08:23.527 17845.957 - 17946.782: 99.3787% ( 3)
00:08:23.527 17946.782 - 18047.606: 99.3971% ( 3)
00:08:23.527 18047.606 - 18148.431: 99.4094% ( 2)
00:08:23.527 18148.431 - 18249.255: 99.4341% ( 4)
00:08:23.527 18249.255 - 18350.080: 99.4525% ( 3)
00:08:23.527 18350.080 - 18450.905: 99.4710% ( 3)
00:08:23.527 18450.905 - 18551.729: 99.4894% ( 3)
00:08:23.527 18551.729 - 18652.554: 99.5079% ( 3)
00:08:23.527 18652.554 - 18753.378: 99.5263% ( 3)
00:08:23.527 18753.378 - 18854.203: 99.5448% ( 3)
00:08:23.527 18854.203 - 18955.028: 99.5694% ( 4)
00:08:23.527 18955.028 - 19055.852: 99.5878% ( 3)
00:08:23.528 19055.852 - 19156.677: 99.6001% ( 2)
00:08:23.528 19156.677 - 19257.502: 99.6063% ( 1)
00:08:23.528 24197.908 - 24298.732: 99.6125% ( 1)
00:08:23.528 24298.732 - 24399.557: 99.6494% ( 6)
00:08:23.528 24399.557 - 24500.382: 99.6863% ( 6)
00:08:23.528 24500.382 - 24601.206: 99.7232% ( 6)
00:08:23.528 24601.206 - 24702.031: 99.7539% ( 5)
00:08:23.528 24702.031 - 24802.855: 99.7847% ( 5)
00:08:23.528 24802.855 - 24903.680: 99.8216% ( 6)
00:08:23.528 24903.680 - 25004.505: 99.8585% ( 6)
00:08:23.528 25004.505 - 25105.329: 99.8954% ( 6)
00:08:23.528 25105.329 - 25206.154: 99.9323% ( 6)
00:08:23.528 25206.154 - 25306.978: 99.9631% ( 5)
00:08:23.528 25306.978 - 25407.803: 100.0000% ( 6)
00:08:23.528
00:08:23.528 20:50:41 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:24.975 Initializing NVMe Controllers
00:08:24.975 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:24.975 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:24.975 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:24.975 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:24.975 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:24.975 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:24.975 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:24.975 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:24.975 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:24.975 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:24.975 Initialization complete. Launching workers.
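The spdk_nvme_perf invocation recorded just above is dense, so the sketch below restates it with each flag annotated. This is an editorial gloss assuming standard flag semantics from the upstream SPDK perf tool's usage text; it is not output captured in this log, and the throughput arithmetic in the comments simply re-derives figures from the summary table that follows.

#!/usr/bin/env bash
# Re-issue the write-latency pass exactly as logged above; the binary path is
# the one built in this CI workspace.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w write -o 12288 -t 1 -LL -i 0
# Flag glossary (assumed from SPDK perf usage text, not from this log):
#   -q 128    io depth: keep up to 128 commands outstanding per namespace
#   -w write  workload: 100% sequential writes
#   -o 12288  io size in bytes (12 KiB); at the 16896 IOPS each device reports
#             below, 16896 x 12288 B works out to about 198 MiB/s, matching
#             the per-device rows in the summary table
#   -t 1      run time in seconds
#   -LL       latency tracking: one -L enables software-side tracking, and
#             doubling it also requests hardware-assisted tracking where the
#             controller supports it; this produces the percentile and
#             histogram output that follows
#   -i 0      shared memory group ID, as used elsewhere in this autotest run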
00:08:24.975 ========================================================
00:08:24.976 Latency(us)
00:08:24.976 Device Information : IOPS MiB/s Average min max
00:08:24.976 PCIE (0000:00:10.0) NSID 1 from core 0: 16896.00 198.00 7582.02 5641.07 26789.80
00:08:24.976 PCIE (0000:00:11.0) NSID 1 from core 0: 16896.00 198.00 7576.21 5825.56 25836.68
00:08:24.976 PCIE (0000:00:13.0) NSID 1 from core 0: 16896.00 198.00 7569.39 5876.47 25061.73
00:08:24.976 PCIE (0000:00:12.0) NSID 1 from core 0: 16896.00 198.00 7561.89 5828.66 24244.33
00:08:24.976 PCIE (0000:00:12.0) NSID 2 from core 0: 16896.00 198.00 7554.41 5524.53 23634.56
00:08:24.976 PCIE (0000:00:12.0) NSID 3 from core 0: 16896.00 198.00 7546.96 4731.66 22694.24
00:08:24.976 ========================================================
00:08:24.976 Total : 101375.98 1188.00 7565.15 4731.66 26789.80
00:08:24.976
00:08:24.976 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:24.976 =================================================================================
00:08:24.976 1.00000% : 6150.302us
00:08:24.976 10.00000% : 6503.188us
00:08:24.976 25.00000% : 6654.425us
00:08:24.976 50.00000% : 6856.074us
00:08:24.976 75.00000% : 7309.785us
00:08:24.976 90.00000% : 10032.049us
00:08:24.976 95.00000% : 13006.375us
00:08:24.976 98.00000% : 14216.271us
00:08:24.976 99.00000% : 15022.868us
00:08:24.976 99.50000% : 18148.431us
00:08:24.976 99.90000% : 26617.698us
00:08:24.976 99.99000% : 26819.348us
00:08:24.976 99.99900% : 26819.348us
00:08:24.976 99.99990% : 26819.348us
00:08:24.976 99.99999% : 26819.348us
00:08:24.976
00:08:24.976 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:24.976 =================================================================================
00:08:24.976 1.00000% : 6225.920us
00:08:24.976 10.00000% : 6604.012us
00:08:24.976 25.00000% : 6704.837us
00:08:24.976 50.00000% : 6856.074us
00:08:24.976 75.00000% : 7108.135us
00:08:24.976 90.00000% : 10233.698us
00:08:24.976 95.00000% : 12804.726us
00:08:24.976 98.00000% : 14417.920us
00:08:24.976 99.00000% : 15426.166us
00:08:24.976 99.50000% : 18955.028us
00:08:24.976 99.90000% : 25609.452us
00:08:24.976 99.99000% : 26012.751us
00:08:24.976 99.99900% : 26012.751us
00:08:24.976 99.99990% : 26012.751us
00:08:24.976 99.99999% : 26012.751us
00:08:24.976
00:08:24.976 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:24.976 =================================================================================
00:08:24.976 1.00000% : 6225.920us
00:08:24.976 10.00000% : 6604.012us
00:08:24.976 25.00000% : 6704.837us
00:08:24.976 50.00000% : 6856.074us
00:08:24.976 75.00000% : 7108.135us
00:08:24.976 90.00000% : 10032.049us
00:08:24.976 95.00000% : 12603.077us
00:08:24.976 98.00000% : 14619.569us
00:08:24.976 99.00000% : 15426.166us
00:08:24.976 99.50000% : 19559.975us
00:08:24.976 99.90000% : 24399.557us
00:08:24.976 99.99000% : 25105.329us
00:08:24.976 99.99900% : 25105.329us
00:08:24.976 99.99990% : 25105.329us
00:08:24.976 99.99999% : 25105.329us
00:08:24.976
00:08:24.976 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:24.976 =================================================================================
00:08:24.976 1.00000% : 6200.714us
00:08:24.976 10.00000% : 6604.012us
00:08:24.976 25.00000% : 6704.837us
00:08:24.976 50.00000% : 6856.074us
00:08:24.976 75.00000% : 7108.135us
00:08:24.976 90.00000% : 10334.523us
00:08:24.976 95.00000% : 12905.551us
00:08:24.976 98.00000% : 14720.394us
99.00000% : 15426.166us 00:08:24.976 99.50000% : 19660.800us 00:08:24.976 99.90000% : 23592.960us 00:08:24.976 99.99000% : 24298.732us 00:08:24.976 99.99900% : 24298.732us 00:08:24.976 99.99990% : 24298.732us 00:08:24.976 99.99999% : 24298.732us 00:08:24.976 00:08:24.976 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:24.976 ================================================================================= 00:08:24.976 1.00000% : 6200.714us 00:08:24.976 10.00000% : 6604.012us 00:08:24.976 25.00000% : 6704.837us 00:08:24.976 50.00000% : 6856.074us 00:08:24.976 75.00000% : 7158.548us 00:08:24.976 90.00000% : 9880.812us 00:08:24.976 95.00000% : 12905.551us 00:08:24.976 98.00000% : 14317.095us 00:08:24.976 99.00000% : 15325.342us 00:08:24.976 99.50000% : 19660.800us 00:08:24.976 99.90000% : 23391.311us 00:08:24.976 99.99000% : 23693.785us 00:08:24.976 99.99900% : 23693.785us 00:08:24.976 99.99990% : 23693.785us 00:08:24.976 99.99999% : 23693.785us 00:08:24.976 00:08:24.976 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:24.976 ================================================================================= 00:08:24.976 1.00000% : 6175.508us 00:08:24.976 10.00000% : 6604.012us 00:08:24.976 25.00000% : 6704.837us 00:08:24.976 50.00000% : 6856.074us 00:08:24.976 75.00000% : 7108.135us 00:08:24.976 90.00000% : 9427.102us 00:08:24.976 95.00000% : 13107.200us 00:08:24.976 98.00000% : 14014.622us 00:08:24.976 99.00000% : 15022.868us 00:08:24.976 99.50000% : 19055.852us 00:08:24.976 99.90000% : 21979.766us 00:08:24.976 99.99000% : 22685.538us 00:08:24.976 99.99900% : 22786.363us 00:08:24.976 99.99990% : 22786.363us 00:08:24.976 99.99999% : 22786.363us 00:08:24.976 00:08:24.976 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:24.976 ============================================================================== 00:08:24.976 Range in us Cumulative IO count 00:08:24.976 5620.972 - 5646.178: 0.0059% ( 1) 00:08:24.976 5646.178 - 5671.385: 0.0117% ( 1) 00:08:24.976 5671.385 - 5696.591: 0.0235% ( 2) 00:08:24.976 5696.591 - 5721.797: 0.0294% ( 1) 00:08:24.976 5721.797 - 5747.003: 0.0352% ( 1) 00:08:24.976 5747.003 - 5772.209: 0.0411% ( 1) 00:08:24.976 5772.209 - 5797.415: 0.0470% ( 1) 00:08:24.976 5797.415 - 5822.622: 0.0646% ( 3) 00:08:24.976 5822.622 - 5847.828: 0.0822% ( 3) 00:08:24.976 5847.828 - 5873.034: 0.0940% ( 2) 00:08:24.976 5873.034 - 5898.240: 0.1116% ( 3) 00:08:24.976 5898.240 - 5923.446: 0.1292% ( 3) 00:08:24.976 5923.446 - 5948.652: 0.1703% ( 7) 00:08:24.976 5948.652 - 5973.858: 0.2173% ( 8) 00:08:24.976 5973.858 - 5999.065: 0.2702% ( 9) 00:08:24.976 5999.065 - 6024.271: 0.3466% ( 13) 00:08:24.976 6024.271 - 6049.477: 0.4582% ( 19) 00:08:24.976 6049.477 - 6074.683: 0.5992% ( 24) 00:08:24.976 6074.683 - 6099.889: 0.7695% ( 29) 00:08:24.976 6099.889 - 6125.095: 0.9164% ( 25) 00:08:24.976 6125.095 - 6150.302: 1.0926% ( 30) 00:08:24.976 6150.302 - 6175.508: 1.3040% ( 36) 00:08:24.976 6175.508 - 6200.714: 1.5742% ( 46) 00:08:24.976 6200.714 - 6225.920: 1.9208% ( 59) 00:08:24.976 6225.920 - 6251.126: 2.2556% ( 57) 00:08:24.976 6251.126 - 6276.332: 2.6081% ( 60) 00:08:24.976 6276.332 - 6301.538: 3.2895% ( 116) 00:08:24.976 6301.538 - 6326.745: 3.8945% ( 103) 00:08:24.976 6326.745 - 6351.951: 4.7169% ( 140) 00:08:24.976 6351.951 - 6377.157: 5.5451% ( 141) 00:08:24.976 6377.157 - 6402.363: 6.3381% ( 135) 00:08:24.976 6402.363 - 6427.569: 7.4366% ( 187) 00:08:24.976 6427.569 - 6452.775: 9.0754% ( 279) 00:08:24.976 6452.775 - 6503.188: 
12.8642% ( 645) 00:08:24.976 6503.188 - 6553.600: 17.6104% ( 808) 00:08:24.976 6553.600 - 6604.012: 23.2789% ( 965) 00:08:24.976 6604.012 - 6654.425: 29.6699% ( 1088) 00:08:24.976 6654.425 - 6704.837: 36.0550% ( 1087) 00:08:24.976 6704.837 - 6755.249: 41.8644% ( 989) 00:08:24.976 6755.249 - 6805.662: 47.2157% ( 911) 00:08:24.976 6805.662 - 6856.074: 51.3980% ( 712) 00:08:24.976 6856.074 - 6906.486: 54.7815% ( 576) 00:08:24.976 6906.486 - 6956.898: 57.8536% ( 523) 00:08:24.976 6956.898 - 7007.311: 61.1255% ( 557) 00:08:24.976 7007.311 - 7057.723: 64.1624% ( 517) 00:08:24.976 7057.723 - 7108.135: 67.2110% ( 519) 00:08:24.976 7108.135 - 7158.548: 70.1422% ( 499) 00:08:24.976 7158.548 - 7208.960: 72.8325% ( 458) 00:08:24.976 7208.960 - 7259.372: 74.7709% ( 330) 00:08:24.976 7259.372 - 7309.785: 76.6447% ( 319) 00:08:24.976 7309.785 - 7360.197: 78.0369% ( 237) 00:08:24.976 7360.197 - 7410.609: 79.0648% ( 175) 00:08:24.976 7410.609 - 7461.022: 79.8637% ( 136) 00:08:24.976 7461.022 - 7511.434: 80.7331% ( 148) 00:08:24.976 7511.434 - 7561.846: 81.3029% ( 97) 00:08:24.976 7561.846 - 7612.258: 81.7787% ( 81) 00:08:24.976 7612.258 - 7662.671: 82.2427% ( 79) 00:08:24.976 7662.671 - 7713.083: 82.6245% ( 65) 00:08:24.976 7713.083 - 7763.495: 83.0298% ( 69) 00:08:24.976 7763.495 - 7813.908: 83.3353% ( 52) 00:08:24.976 7813.908 - 7864.320: 83.6525% ( 54) 00:08:24.976 7864.320 - 7914.732: 83.9756% ( 55) 00:08:24.976 7914.732 - 7965.145: 84.2810% ( 52) 00:08:24.976 7965.145 - 8015.557: 84.6041% ( 55) 00:08:24.976 8015.557 - 8065.969: 84.9624% ( 61) 00:08:24.976 8065.969 - 8116.382: 85.1328% ( 29) 00:08:24.976 8116.382 - 8166.794: 85.3677% ( 40) 00:08:24.976 8166.794 - 8217.206: 85.5557% ( 32) 00:08:24.976 8217.206 - 8267.618: 85.7084% ( 26) 00:08:24.976 8267.618 - 8318.031: 85.8611% ( 26) 00:08:24.976 8318.031 - 8368.443: 86.0021% ( 24) 00:08:24.976 8368.443 - 8418.855: 86.1020% ( 17) 00:08:24.976 8418.855 - 8469.268: 86.1960% ( 16) 00:08:24.976 8469.268 - 8519.680: 86.3076% ( 19) 00:08:24.976 8519.680 - 8570.092: 86.5014% ( 33) 00:08:24.976 8570.092 - 8620.505: 86.6953% ( 33) 00:08:24.976 8620.505 - 8670.917: 86.8245% ( 22) 00:08:24.976 8670.917 - 8721.329: 86.9243% ( 17) 00:08:24.977 8721.329 - 8771.742: 87.0653% ( 24) 00:08:24.977 8771.742 - 8822.154: 87.1534% ( 15) 00:08:24.977 8822.154 - 8872.566: 87.2415% ( 15) 00:08:24.977 8872.566 - 8922.978: 87.3766% ( 23) 00:08:24.977 8922.978 - 8973.391: 87.4941% ( 20) 00:08:24.977 8973.391 - 9023.803: 87.5764% ( 14) 00:08:24.977 9023.803 - 9074.215: 87.6645% ( 15) 00:08:24.977 9074.215 - 9124.628: 87.7643% ( 17) 00:08:24.977 9124.628 - 9175.040: 87.8877% ( 21) 00:08:24.977 9175.040 - 9225.452: 88.0874% ( 34) 00:08:24.977 9225.452 - 9275.865: 88.2989% ( 36) 00:08:24.977 9275.865 - 9326.277: 88.4575% ( 27) 00:08:24.977 9326.277 - 9376.689: 88.5750% ( 20) 00:08:24.977 9376.689 - 9427.102: 88.7159% ( 24) 00:08:24.977 9427.102 - 9477.514: 88.8628% ( 25) 00:08:24.977 9477.514 - 9527.926: 88.9450% ( 14) 00:08:24.977 9527.926 - 9578.338: 89.1036% ( 27) 00:08:24.977 9578.338 - 9628.751: 89.2211% ( 20) 00:08:24.977 9628.751 - 9679.163: 89.2975% ( 13) 00:08:24.977 9679.163 - 9729.575: 89.4032% ( 18) 00:08:24.977 9729.575 - 9779.988: 89.4972% ( 16) 00:08:24.977 9779.988 - 9830.400: 89.5970% ( 17) 00:08:24.977 9830.400 - 9880.812: 89.6675% ( 12) 00:08:24.977 9880.812 - 9931.225: 89.7909% ( 21) 00:08:24.977 9931.225 - 9981.637: 89.9260% ( 23) 00:08:24.977 9981.637 - 10032.049: 90.0023% ( 13) 00:08:24.977 10032.049 - 10082.462: 90.0905% ( 15) 00:08:24.977 10082.462 - 
10132.874: 90.1844% ( 16) 00:08:24.977 10132.874 - 10183.286: 90.3019% ( 20) 00:08:24.977 10183.286 - 10233.698: 90.3959% ( 16) 00:08:24.977 10233.698 - 10284.111: 90.5016% ( 18) 00:08:24.977 10284.111 - 10334.523: 90.6367% ( 23) 00:08:24.977 10334.523 - 10384.935: 90.8188% ( 31) 00:08:24.977 10384.935 - 10435.348: 90.9481% ( 22) 00:08:24.977 10435.348 - 10485.760: 91.0773% ( 22) 00:08:24.977 10485.760 - 10536.172: 91.1654% ( 15) 00:08:24.977 10536.172 - 10586.585: 91.2829% ( 20) 00:08:24.977 10586.585 - 10636.997: 91.3710% ( 15) 00:08:24.977 10636.997 - 10687.409: 91.4885% ( 20) 00:08:24.977 10687.409 - 10737.822: 91.5590% ( 12) 00:08:24.977 10737.822 - 10788.234: 91.6530% ( 16) 00:08:24.977 10788.234 - 10838.646: 91.7293% ( 13) 00:08:24.977 10838.646 - 10889.058: 91.8174% ( 15) 00:08:24.977 10889.058 - 10939.471: 91.8762% ( 10) 00:08:24.977 10939.471 - 10989.883: 91.9702% ( 16) 00:08:24.977 10989.883 - 11040.295: 92.0113% ( 7) 00:08:24.977 11040.295 - 11090.708: 92.0935% ( 14) 00:08:24.977 11090.708 - 11141.120: 92.1816% ( 15) 00:08:24.977 11141.120 - 11191.532: 92.2639% ( 14) 00:08:24.977 11191.532 - 11241.945: 92.3696% ( 18) 00:08:24.977 11241.945 - 11292.357: 92.4107% ( 7) 00:08:24.977 11292.357 - 11342.769: 92.4518% ( 7) 00:08:24.977 11342.769 - 11393.182: 92.6222% ( 29) 00:08:24.977 11393.182 - 11443.594: 92.7808% ( 27) 00:08:24.977 11443.594 - 11494.006: 92.8571% ( 13) 00:08:24.977 11494.006 - 11544.418: 92.9159% ( 10) 00:08:24.977 11544.418 - 11594.831: 92.9629% ( 8) 00:08:24.977 11594.831 - 11645.243: 93.0216% ( 10) 00:08:24.977 11645.243 - 11695.655: 93.0686% ( 8) 00:08:24.977 11695.655 - 11746.068: 93.1039% ( 6) 00:08:24.977 11746.068 - 11796.480: 93.1508% ( 8) 00:08:24.977 11796.480 - 11846.892: 93.1861% ( 6) 00:08:24.977 11846.892 - 11897.305: 93.2272% ( 7) 00:08:24.977 11897.305 - 11947.717: 93.2918% ( 11) 00:08:24.977 11947.717 - 11998.129: 93.3447% ( 9) 00:08:24.977 11998.129 - 12048.542: 93.4034% ( 10) 00:08:24.977 12048.542 - 12098.954: 93.5092% ( 18) 00:08:24.977 12098.954 - 12149.366: 93.5973% ( 15) 00:08:24.977 12149.366 - 12199.778: 93.6736% ( 13) 00:08:24.977 12199.778 - 12250.191: 93.7500% ( 13) 00:08:24.977 12250.191 - 12300.603: 93.8146% ( 11) 00:08:24.977 12300.603 - 12351.015: 93.8969% ( 14) 00:08:24.977 12351.015 - 12401.428: 93.9497% ( 9) 00:08:24.977 12401.428 - 12451.840: 94.0320% ( 14) 00:08:24.977 12451.840 - 12502.252: 94.1436% ( 19) 00:08:24.977 12502.252 - 12552.665: 94.2375% ( 16) 00:08:24.977 12552.665 - 12603.077: 94.3139% ( 13) 00:08:24.977 12603.077 - 12653.489: 94.4549% ( 24) 00:08:24.977 12653.489 - 12703.902: 94.5430% ( 15) 00:08:24.977 12703.902 - 12754.314: 94.6487% ( 18) 00:08:24.977 12754.314 - 12804.726: 94.7545% ( 18) 00:08:24.977 12804.726 - 12855.138: 94.8661% ( 19) 00:08:24.977 12855.138 - 12905.551: 94.9366% ( 12) 00:08:24.977 12905.551 - 13006.375: 95.1715% ( 40) 00:08:24.977 13006.375 - 13107.200: 95.4241% ( 43) 00:08:24.977 13107.200 - 13208.025: 95.7354% ( 53) 00:08:24.977 13208.025 - 13308.849: 96.0526% ( 54) 00:08:24.977 13308.849 - 13409.674: 96.2876% ( 40) 00:08:24.977 13409.674 - 13510.498: 96.5578% ( 46) 00:08:24.977 13510.498 - 13611.323: 96.7928% ( 40) 00:08:24.977 13611.323 - 13712.148: 97.0630% ( 46) 00:08:24.977 13712.148 - 13812.972: 97.3097% ( 42) 00:08:24.977 13812.972 - 13913.797: 97.5035% ( 33) 00:08:24.977 13913.797 - 14014.622: 97.6680% ( 28) 00:08:24.977 14014.622 - 14115.446: 97.8383% ( 29) 00:08:24.977 14115.446 - 14216.271: 98.0263% ( 32) 00:08:24.977 14216.271 - 14317.095: 98.0851% ( 10) 00:08:24.977 
14317.095 - 14417.920: 98.1908% ( 18) 00:08:24.977 14417.920 - 14518.745: 98.3141% ( 21) 00:08:24.977 14518.745 - 14619.569: 98.5902% ( 47) 00:08:24.977 14619.569 - 14720.394: 98.7841% ( 33) 00:08:24.977 14720.394 - 14821.218: 98.8957% ( 19) 00:08:24.977 14821.218 - 14922.043: 98.9662% ( 12) 00:08:24.977 14922.043 - 15022.868: 99.0132% ( 8) 00:08:24.977 15022.868 - 15123.692: 99.0543% ( 7) 00:08:24.977 15123.692 - 15224.517: 99.1130% ( 10) 00:08:24.977 15224.517 - 15325.342: 99.1424% ( 5) 00:08:24.977 15325.342 - 15426.166: 99.1776% ( 6) 00:08:24.977 15426.166 - 15526.991: 99.2188% ( 7) 00:08:24.977 15526.991 - 15627.815: 99.2364% ( 3) 00:08:24.977 15627.815 - 15728.640: 99.2481% ( 2) 00:08:24.977 17341.834 - 17442.658: 99.2540% ( 1) 00:08:24.977 17442.658 - 17543.483: 99.2775% ( 4) 00:08:24.977 17543.483 - 17644.308: 99.3010% ( 4) 00:08:24.977 17644.308 - 17745.132: 99.3304% ( 5) 00:08:24.977 17745.132 - 17845.957: 99.3656% ( 6) 00:08:24.977 17845.957 - 17946.782: 99.3950% ( 5) 00:08:24.977 17946.782 - 18047.606: 99.4537% ( 10) 00:08:24.977 18047.606 - 18148.431: 99.5125% ( 10) 00:08:24.977 18148.431 - 18249.255: 99.5242% ( 2) 00:08:24.977 18249.255 - 18350.080: 99.5359% ( 2) 00:08:24.977 18350.080 - 18450.905: 99.5536% ( 3) 00:08:24.977 18450.905 - 18551.729: 99.5594% ( 1) 00:08:24.977 18551.729 - 18652.554: 99.5712% ( 2) 00:08:24.977 18652.554 - 18753.378: 99.5771% ( 1) 00:08:24.977 18753.378 - 18854.203: 99.6064% ( 5) 00:08:24.977 18854.203 - 18955.028: 99.6241% ( 3) 00:08:24.977 25407.803 - 25508.628: 99.6417% ( 3) 00:08:24.977 25508.628 - 25609.452: 99.6652% ( 4) 00:08:24.977 25609.452 - 25710.277: 99.7004% ( 6) 00:08:24.977 25710.277 - 25811.102: 99.7239% ( 4) 00:08:24.977 25811.102 - 26012.751: 99.7768% ( 9) 00:08:24.977 26012.751 - 26214.400: 99.8297% ( 9) 00:08:24.977 26214.400 - 26416.049: 99.8943% ( 11) 00:08:24.977 26416.049 - 26617.698: 99.9530% ( 10) 00:08:24.977 26617.698 - 26819.348: 100.0000% ( 8) 00:08:24.977 00:08:24.977 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:24.977 ============================================================================== 00:08:24.977 Range in us Cumulative IO count 00:08:24.977 5822.622 - 5847.828: 0.0176% ( 3) 00:08:24.977 5847.828 - 5873.034: 0.0294% ( 2) 00:08:24.977 5873.034 - 5898.240: 0.0529% ( 4) 00:08:24.977 5898.240 - 5923.446: 0.0764% ( 4) 00:08:24.977 5923.446 - 5948.652: 0.1057% ( 5) 00:08:24.977 5948.652 - 5973.858: 0.1351% ( 5) 00:08:24.977 5973.858 - 5999.065: 0.2350% ( 17) 00:08:24.977 5999.065 - 6024.271: 0.3113% ( 13) 00:08:24.977 6024.271 - 6049.477: 0.3348% ( 4) 00:08:24.977 6049.477 - 6074.683: 0.3701% ( 6) 00:08:24.977 6074.683 - 6099.889: 0.4112% ( 7) 00:08:24.977 6099.889 - 6125.095: 0.4464% ( 6) 00:08:24.977 6125.095 - 6150.302: 0.5404% ( 16) 00:08:24.977 6150.302 - 6175.508: 0.6227% ( 14) 00:08:24.977 6175.508 - 6200.714: 0.9633% ( 58) 00:08:24.977 6200.714 - 6225.920: 1.2394% ( 47) 00:08:24.977 6225.920 - 6251.126: 1.5801% ( 58) 00:08:24.977 6251.126 - 6276.332: 1.9737% ( 67) 00:08:24.977 6276.332 - 6301.538: 2.4554% ( 82) 00:08:24.977 6301.538 - 6326.745: 2.9429% ( 83) 00:08:24.977 6326.745 - 6351.951: 3.1602% ( 37) 00:08:24.977 6351.951 - 6377.157: 3.4539% ( 50) 00:08:24.977 6377.157 - 6402.363: 3.7300% ( 47) 00:08:24.977 6402.363 - 6427.569: 4.1823% ( 77) 00:08:24.977 6427.569 - 6452.775: 4.7051% ( 89) 00:08:24.977 6452.775 - 6503.188: 6.8550% ( 366) 00:08:24.977 6503.188 - 6553.600: 9.5336% ( 456) 00:08:24.977 6553.600 - 6604.012: 13.8687% ( 738) 00:08:24.977 6604.012 - 6654.425: 19.9307% 
( 1032)
00:08:24.977   6654.425 -  6704.837: 26.9091% ( 1188)
00:08:24.977 [ ... intermediate latency buckets, 6704.837 us to 25811.102 us ... ]
00:08:24.978  25811.102 - 26012.751: 100.0000% ( 2)
00:08:24.978
00:08:24.978 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:24.978 ==============================================================================
00:08:24.978        Range in us     Cumulative    IO count
00:08:24.978   5873.034 -  5898.240: 0.0059% ( 1)
00:08:24.978 [ ... intermediate latency buckets, 5898.240 us to 25004.505 us ... ]
00:08:24.980  25004.505 - 25105.329: 100.0000% ( 3)
00:08:24.980
00:08:24.980 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:24.980 ==============================================================================
00:08:24.980        Range in us     Cumulative    IO count
00:08:24.980   5822.622 -  5847.828: 0.0059% ( 1)
00:08:24.980 [ ... intermediate latency buckets, 5847.828 us to 24197.908 us ... ]
00:08:24.981  24197.908 - 24298.732: 100.0000% ( 3)
00:08:24.981
00:08:24.981 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:24.981 ==============================================================================
00:08:24.981        Range in us     Cumulative    IO count
00:08:24.981   5520.148 -  5545.354: 0.0059% ( 1)
00:08:24.981 [ ... intermediate latency buckets, 5545.354 us to 23592.960 us ... ]
00:08:24.982  23592.960 - 23693.785: 100.0000% ( 3)
00:08:24.982
00:08:24.982 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:24.982 ==============================================================================
00:08:24.982        Range in us     Cumulative    IO count
00:08:24.982   4713.551 -  4738.757: 0.0059% ( 1)
00:08:24.982 [ ... intermediate latency buckets, 4738.757 us to 22685.538 us ... ]
00:08:24.983  22685.538 - 22786.363: 100.0000% ( 1)
00:08:24.983
00:08:24.983 20:50:42 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:08:24.983
00:08:24.983 real 0m2.461s
00:08:24.983 user 0m2.171s
00:08:24.983 sys 0m0.183s
00:08:24.983 20:50:42 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:24.983 ************************************
00:08:24.983 END TEST nvme_perf
00:08:24.983 ************************************
00:08:24.983 20:50:42 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
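The tables above are per-controller I/O latency histograms from the perf test: each row is a tick-range bucket converted to microseconds, with the cumulative percentage of I/Os completed at or below that latency. A minimal sketch of how such a histogram can be tallied and printed, assuming SPDK's spdk/histogram_data.h API (spdk_histogram_data_tally/iterate); the function names here are illustrative, not taken from this test:

#include <stdio.h>
#include <inttypes.h>
#include "spdk/env.h"
#include "spdk/histogram_data.h"

/* Called once per bucket by spdk_histogram_data_iterate(). */
static void
print_bucket(void *ctx, uint64_t start, uint64_t end, uint64_t count,
             uint64_t total, uint64_t so_far)
{
    uint64_t ticks_hz = *(uint64_t *)ctx;

    if (count == 0) {
        return;
    }
    /* Bucket edges are TSC ticks; converting to microseconds is why the
     * printed edges are non-round values like 6654.425. */
    printf("%9.3f - %9.3f: %9.4f%%  (%9" PRIu64 ")\n",
           (double)start * 1000 * 1000 / ticks_hz,
           (double)end * 1000 * 1000 / ticks_hz,
           (double)so_far * 100 / total, count);
}

/* On each I/O completion: record the elapsed submit-to-complete ticks. */
void
record_latency(struct spdk_histogram_data *h, uint64_t submit_tsc)
{
    spdk_histogram_data_tally(h, spdk_get_ticks() - submit_tsc);
}

/* At end of run: walk the buckets in ascending latency order. */
void
print_histogram(struct spdk_histogram_data *h)
{
    uint64_t ticks_hz = spdk_get_ticks_hz();

    printf("       Range in us     Cumulative    IO count\n");
    spdk_histogram_data_iterate(h, print_bucket, &ticks_hz);
}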
00:08:24.983 20:50:42 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
20:50:42 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:24.983 20:50:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:24.983 20:50:42 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:24.983 ************************************
00:08:24.983 START TEST nvme_hello_world
00:08:24.983 ************************************
00:08:24.983 20:50:42 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:24.983 Initializing NVMe Controllers
00:08:24.983 Attached to 0000:00:10.0
00:08:24.983 Namespace ID: 1 size: 6GB
00:08:24.983 Attached to 0000:00:11.0
00:08:24.983 Namespace ID: 1 size: 5GB
00:08:24.983 Attached to 0000:00:13.0
00:08:24.983 Namespace ID: 1 size: 1GB
00:08:24.983 Attached to 0000:00:12.0
00:08:24.983 Namespace ID: 1 size: 4GB
00:08:24.983 Namespace ID: 2 size: 4GB
00:08:24.983 Namespace ID: 3 size: 4GB
00:08:24.983 Initialization complete.
00:08:24.983 INFO: using host memory buffer for IO
00:08:24.983 Hello world!
00:08:24.983 INFO: using host memory buffer for IO
00:08:24.983 Hello world!
00:08:24.983 INFO: using host memory buffer for IO
00:08:24.983 Hello world!
00:08:24.983 INFO: using host memory buffer for IO
00:08:24.983 Hello world!
00:08:24.983 INFO: using host memory buffer for IO
00:08:24.983 Hello world!
00:08:24.983 INFO: using host memory buffer for IO
00:08:24.983 Hello world!
00:08:24.983
00:08:24.983 real 0m0.184s
00:08:24.984 user 0m0.070s
00:08:24.984 sys 0m0.072s
00:08:24.984 20:50:42 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:24.984 20:50:42 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:24.984 ************************************
00:08:24.984 END TEST nvme_hello_world
00:08:24.984 ************************************
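hello_world probes every local PCIe NVMe controller, attaches, and lists each active namespace before writing a one-sector buffer and reading it back. A condensed sketch of the probe/attach flow behind the "Attached to .../Namespace ID" lines, assuming SPDK's public C API (spdk/nvme.h); error handling and the write/read step are trimmed:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    return true; /* attach to every controller the probe finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
    uint32_t nsid;

    printf("Attached to %s\n", trid->traddr);
    for (nsid = 1; nsid <= spdk_nvme_ctrlr_get_num_ns(ctrlr); nsid++) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

        if (ns != NULL && spdk_nvme_ns_is_active(ns)) {
            printf(" Namespace ID: %u size: %juGB\n", nsid,
                   (uintmax_t)(spdk_nvme_ns_get_size(ns) / 1000000000));
        }
    }
}

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "hello_world";
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    printf("Initializing NVMe Controllers\n");
    /* NULL transport ID: enumerate local PCIe controllers. */
    if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
        return 1;
    }
    printf("Initialization complete.\n");
    return 0;
}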
00:08:24.984 20:50:42 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
20:50:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:24.984 20:50:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:24.984 20:50:42 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:24.984 ************************************
00:08:24.984 START TEST nvme_sgl
00:08:24.984 ************************************
00:08:24.984 20:50:42 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:24.984 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:24.984 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:24.984 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:25.242 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:25.242 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:25.242 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:25.242 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:25.242 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:25.242 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:25.242 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:25.242 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:25.242 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:25.242 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:25.242 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:25.242 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:25.242 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:25.242 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:25.242 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:25.242 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:25.242 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:25.242 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:25.242 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:25.242 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:25.242 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:25.242 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:25.242 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:25.242 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:25.242 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:25.242 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:25.242 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:25.242 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:25.242 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:25.242 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:25.242 0000:00:12.0: build_io_request_9 Invalid IO length parameter
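The sgl test builds requests from scattered buffers: the driver pulls the payload through reset_sgl/next_sge callbacks rather than one flat buffer, and a request whose summed SGE lengths do not match the requested sector count is rejected up front, which is what the "Invalid IO length parameter" lines record. A sketch of one SGL write under those conventions, assuming spdk_nvme_ns_cmd_writev's callback interface; the two-element context struct and the lba/lba_count values are illustrative:

#include "spdk/nvme.h"

struct sgl_ctx {
    struct { void *base; uint32_t len; } iov[2];
    uint32_t cur;    /* which element next_sge returns next */
    uint32_t offset; /* byte offset requested by the driver */
};

static void
reset_sgl(void *cb_arg, uint32_t offset)
{
    struct sgl_ctx *ctx = cb_arg;

    ctx->cur = 0;
    ctx->offset = offset;
}

static int
next_sge(void *cb_arg, void **address, uint32_t *length)
{
    struct sgl_ctx *ctx = cb_arg;

    *address = ctx->iov[ctx->cur].base;
    *length = ctx->iov[ctx->cur].len;
    ctx->cur++;
    return 0;
}

static int
submit_sgl_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                 struct sgl_ctx *ctx, spdk_nvme_cmd_cb cb_fn)
{
    /* If the summed SGE lengths do not cover lba_count sectors exactly,
     * the request fails before submission: that is the negative path the
     * "Invalid IO length parameter" lines above exercise. */
    return spdk_nvme_ns_cmd_writev(ns, qpair, 0 /* lba */, 8 /* lba_count */,
                                   cb_fn, ctx, 0, reset_sgl, next_sge);
}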
00:08:25.242 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:25.242 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:25.242 NVMe Readv/Writev Request test
00:08:25.242 Attached to 0000:00:10.0
00:08:25.242 Attached to 0000:00:11.0
00:08:25.242 Attached to 0000:00:13.0
00:08:25.242 Attached to 0000:00:12.0
00:08:25.242 0000:00:10.0: build_io_request_2 test passed
00:08:25.242 0000:00:10.0: build_io_request_4 test passed
00:08:25.242 0000:00:10.0: build_io_request_5 test passed
00:08:25.242 0000:00:10.0: build_io_request_6 test passed
00:08:25.242 0000:00:10.0: build_io_request_7 test passed
00:08:25.242 0000:00:10.0: build_io_request_10 test passed
00:08:25.242 0000:00:11.0: build_io_request_2 test passed
00:08:25.242 0000:00:11.0: build_io_request_4 test passed
00:08:25.242 0000:00:11.0: build_io_request_5 test passed
00:08:25.242 0000:00:11.0: build_io_request_6 test passed
00:08:25.242 0000:00:11.0: build_io_request_7 test passed
00:08:25.242 0000:00:11.0: build_io_request_10 test passed
00:08:25.242 Cleaning up...
00:08:25.242
00:08:25.242 real 0m0.251s
00:08:25.242 user 0m0.124s
00:08:25.242 sys 0m0.074s
00:08:25.242 20:50:43 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:25.242 20:50:43 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:25.242 ************************************
00:08:25.242 END TEST nvme_sgl
00:08:25.242 ************************************
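The nvme_dp test that runs next exercises end-to-end data protection: on a namespace formatted with protection information, each transfer carries DIF metadata that the controller verifies in flight. A sketch of one protected write, assuming spdk_nvme_ns_cmd_write_with_md and the PRCHK io_flags; buffer allocation and DIF field generation are omitted, and the flag choice is illustrative:

#include "spdk/nvme.h"

static int
write_with_pi(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
              void *data, void *md, spdk_nvme_cmd_cb cb_fn, void *cb_arg)
{
    /* Ask the controller to check the DIF guard (CRC) and reference tag
     * as the data passes through. */
    uint32_t io_flags = SPDK_NVME_IO_FLAGS_PRCHK_GUARD |
                        SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;

    /* The separate metadata buffer carries the protection information. */
    return spdk_nvme_ns_cmd_write_with_md(ns, qpair, data, md,
                                          0 /* lba */, 8 /* lba_count */,
                                          cb_fn, cb_arg, io_flags,
                                          0 /* apptag_mask */, 0 /* apptag */);
}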
00:08:25.501 ************************************ 00:08:25.501 END TEST nvme_e2edp 00:08:25.501 ************************************ 00:08:25.501 00:08:25.501 real 0m0.188s 00:08:25.501 user 0m0.063s 00:08:25.501 sys 0m0.076s 00:08:25.501 20:50:43 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.501 20:50:43 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:25.501 20:50:43 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:25.501 20:50:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.501 20:50:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.501 20:50:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.501 ************************************ 00:08:25.501 START TEST nvme_reserve 00:08:25.501 ************************************ 00:08:25.501 20:50:43 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:25.501 ===================================================== 00:08:25.501 NVMe Controller at PCI bus 0, device 16, function 0 00:08:25.501 ===================================================== 00:08:25.501 Reservations: Not Supported 00:08:25.501 ===================================================== 00:08:25.501 NVMe Controller at PCI bus 0, device 17, function 0 00:08:25.501 ===================================================== 00:08:25.501 Reservations: Not Supported 00:08:25.501 ===================================================== 00:08:25.501 NVMe Controller at PCI bus 0, device 19, function 0 00:08:25.501 ===================================================== 00:08:25.501 Reservations: Not Supported 00:08:25.501 ===================================================== 00:08:25.501 NVMe Controller at PCI bus 0, device 18, function 0 00:08:25.501 ===================================================== 00:08:25.501 Reservations: Not Supported 00:08:25.501 Reservation test passed 00:08:25.501 00:08:25.501 real 0m0.179s 00:08:25.501 user 0m0.068s 00:08:25.501 sys 0m0.071s 00:08:25.501 20:50:43 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.501 20:50:43 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:25.501 ************************************ 00:08:25.501 END TEST nvme_reserve 00:08:25.501 ************************************ 00:08:25.759 20:50:43 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:25.759 20:50:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.759 20:50:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.759 20:50:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.759 ************************************ 00:08:25.759 START TEST nvme_err_injection 00:08:25.759 ************************************ 00:08:25.759 20:50:43 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:25.759 NVMe Error Injection test 00:08:25.759 Attached to 0000:00:10.0 00:08:25.759 Attached to 0000:00:11.0 00:08:25.759 Attached to 0000:00:13.0 00:08:25.759 Attached to 0000:00:12.0 00:08:25.759 0000:00:10.0: get features failed as expected 00:08:25.759 0000:00:11.0: get features failed as expected 00:08:25.759 0000:00:13.0: get features failed as expected 00:08:25.759 0000:00:12.0: get features failed as expected 00:08:25.759 
0000:00:10.0: get features successfully as expected 00:08:25.759 0000:00:11.0: get features successfully as expected 00:08:25.759 0000:00:13.0: get features successfully as expected 00:08:25.759 0000:00:12.0: get features successfully as expected 00:08:25.759 0000:00:11.0: read failed as expected 00:08:25.759 0000:00:13.0: read failed as expected 00:08:25.759 0000:00:12.0: read failed as expected 00:08:25.759 0000:00:10.0: read failed as expected 00:08:25.759 0000:00:11.0: read successfully as expected 00:08:25.759 0000:00:13.0: read successfully as expected 00:08:25.759 0000:00:12.0: read successfully as expected 00:08:25.759 0000:00:10.0: read successfully as expected 00:08:25.759 Cleaning up... 00:08:25.759 00:08:25.759 real 0m0.188s 00:08:25.759 user 0m0.066s 00:08:25.759 sys 0m0.078s 00:08:25.759 20:50:43 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.759 20:50:43 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:25.759 ************************************ 00:08:25.759 END TEST nvme_err_injection 00:08:25.759 ************************************ 00:08:25.759 20:50:43 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:25.759 20:50:43 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:25.759 20:50:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.759 20:50:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.759 ************************************ 00:08:25.759 START TEST nvme_overhead 00:08:25.759 ************************************ 00:08:25.759 20:50:43 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:27.134 Initializing NVMe Controllers 00:08:27.134 Attached to 0000:00:10.0 00:08:27.134 Attached to 0000:00:11.0 00:08:27.134 Attached to 0000:00:13.0 00:08:27.134 Attached to 0000:00:12.0 00:08:27.134 Initialization complete. Launching workers. 
00:08:27.134 submit (in ns) avg, min, max = 12092.2, 11210.8, 51293.8 00:08:27.134 complete (in ns) avg, min, max = 8040.1, 7245.4, 160831.5 00:08:27.134 00:08:27.134 Submit histogram 00:08:27.134 ================ 00:08:27.134 Range in us Cumulative Count 00:08:27.134 11.175 - 11.225: 0.0061% ( 1) 00:08:27.134 11.471 - 11.520: 0.1840% ( 29) 00:08:27.134 11.520 - 11.569: 0.6010% ( 68) 00:08:27.134 11.569 - 11.618: 2.1343% ( 250) 00:08:27.134 11.618 - 11.668: 6.0779% ( 643) 00:08:27.134 11.668 - 11.717: 13.4499% ( 1202) 00:08:27.134 11.717 - 11.766: 22.9745% ( 1553) 00:08:27.134 11.766 - 11.815: 33.8485% ( 1773) 00:08:27.134 11.815 - 11.865: 44.8513% ( 1794) 00:08:27.134 11.865 - 11.914: 55.3879% ( 1718) 00:08:27.134 11.914 - 11.963: 63.8209% ( 1375) 00:08:27.134 11.963 - 12.012: 70.3772% ( 1069) 00:08:27.134 12.012 - 12.062: 75.4125% ( 821) 00:08:27.134 12.062 - 12.111: 79.5278% ( 671) 00:08:27.134 12.111 - 12.160: 82.9807% ( 563) 00:08:27.134 12.160 - 12.209: 85.8387% ( 466) 00:08:27.134 12.209 - 12.258: 88.2735% ( 397) 00:08:27.134 12.258 - 12.308: 90.1871% ( 312) 00:08:27.134 12.308 - 12.357: 91.6467% ( 238) 00:08:27.134 12.357 - 12.406: 92.8059% ( 189) 00:08:27.134 12.406 - 12.455: 93.6768% ( 142) 00:08:27.134 12.455 - 12.505: 94.3453% ( 109) 00:08:27.134 12.505 - 12.554: 94.8114% ( 76) 00:08:27.134 12.554 - 12.603: 95.1549% ( 56) 00:08:27.134 12.603 - 12.702: 95.6210% ( 76) 00:08:27.134 12.702 - 12.800: 95.9215% ( 49) 00:08:27.134 12.800 - 12.898: 96.0810% ( 26) 00:08:27.134 12.898 - 12.997: 96.2036% ( 20) 00:08:27.134 12.997 - 13.095: 96.3140% ( 18) 00:08:27.134 13.095 - 13.194: 96.4305% ( 19) 00:08:27.134 13.194 - 13.292: 96.5041% ( 12) 00:08:27.134 13.292 - 13.391: 96.5716% ( 11) 00:08:27.134 13.391 - 13.489: 96.6207% ( 8) 00:08:27.134 13.489 - 13.588: 96.6636% ( 7) 00:08:27.134 13.588 - 13.686: 96.7065% ( 7) 00:08:27.134 13.686 - 13.785: 96.7985% ( 15) 00:08:27.134 13.785 - 13.883: 96.9089% ( 18) 00:08:27.134 13.883 - 13.982: 97.1113% ( 33) 00:08:27.134 13.982 - 14.080: 97.3934% ( 46) 00:08:27.134 14.080 - 14.178: 97.6081% ( 35) 00:08:27.134 14.178 - 14.277: 97.7185% ( 18) 00:08:27.134 14.277 - 14.375: 97.7921% ( 12) 00:08:27.134 14.375 - 14.474: 97.8718% ( 13) 00:08:27.134 14.474 - 14.572: 97.9209% ( 8) 00:08:27.134 14.572 - 14.671: 97.9577% ( 6) 00:08:27.134 14.671 - 14.769: 98.0129% ( 9) 00:08:27.134 14.769 - 14.868: 98.0313% ( 3) 00:08:27.134 14.966 - 15.065: 98.0558% ( 4) 00:08:27.134 15.065 - 15.163: 98.0681% ( 2) 00:08:27.134 15.163 - 15.262: 98.0803% ( 2) 00:08:27.134 15.262 - 15.360: 98.1294% ( 8) 00:08:27.134 15.360 - 15.458: 98.1478% ( 3) 00:08:27.134 15.458 - 15.557: 98.1601% ( 2) 00:08:27.134 15.557 - 15.655: 98.1785% ( 3) 00:08:27.134 15.655 - 15.754: 98.2398% ( 10) 00:08:27.134 15.754 - 15.852: 98.2582% ( 3) 00:08:27.134 15.852 - 15.951: 98.2889% ( 5) 00:08:27.134 15.951 - 16.049: 98.3195% ( 5) 00:08:27.134 16.049 - 16.148: 98.3441% ( 4) 00:08:27.134 16.148 - 16.246: 98.3625% ( 3) 00:08:27.134 16.246 - 16.345: 98.3747% ( 2) 00:08:27.134 16.345 - 16.443: 98.3870% ( 2) 00:08:27.134 16.443 - 16.542: 98.4054% ( 3) 00:08:27.134 16.542 - 16.640: 98.4115% ( 1) 00:08:27.134 16.640 - 16.738: 98.4422% ( 5) 00:08:27.134 16.738 - 16.837: 98.4667% ( 4) 00:08:27.134 16.837 - 16.935: 98.4790% ( 2) 00:08:27.134 16.935 - 17.034: 98.5097% ( 5) 00:08:27.135 17.132 - 17.231: 98.5219% ( 2) 00:08:27.135 17.231 - 17.329: 98.5465% ( 4) 00:08:27.135 17.329 - 17.428: 98.5649% ( 3) 00:08:27.135 17.428 - 17.526: 98.6262% ( 10) 00:08:27.135 17.526 - 17.625: 98.6937% ( 11) 00:08:27.135 17.625 - 
17.723: 98.7795% ( 14) 00:08:27.135 17.723 - 17.822: 98.8899% ( 18) 00:08:27.135 17.822 - 17.920: 98.9635% ( 12) 00:08:27.135 17.920 - 18.018: 99.0126% ( 8) 00:08:27.135 18.018 - 18.117: 99.0862% ( 12) 00:08:27.135 18.117 - 18.215: 99.1536% ( 11) 00:08:27.135 18.215 - 18.314: 99.2088% ( 9) 00:08:27.135 18.314 - 18.412: 99.3008% ( 15) 00:08:27.135 18.412 - 18.511: 99.3438% ( 7) 00:08:27.135 18.511 - 18.609: 99.3928% ( 8) 00:08:27.135 18.609 - 18.708: 99.4480% ( 9) 00:08:27.135 18.708 - 18.806: 99.4848% ( 6) 00:08:27.135 18.806 - 18.905: 99.5278% ( 7) 00:08:27.135 18.905 - 19.003: 99.5584% ( 5) 00:08:27.135 19.003 - 19.102: 99.5830% ( 4) 00:08:27.135 19.102 - 19.200: 99.6136% ( 5) 00:08:27.135 19.200 - 19.298: 99.6197% ( 1) 00:08:27.135 19.298 - 19.397: 99.6259% ( 1) 00:08:27.135 19.397 - 19.495: 99.6565% ( 5) 00:08:27.135 19.594 - 19.692: 99.6872% ( 5) 00:08:27.135 19.692 - 19.791: 99.6995% ( 2) 00:08:27.135 19.791 - 19.889: 99.7179% ( 3) 00:08:27.135 19.889 - 19.988: 99.7240% ( 1) 00:08:27.135 19.988 - 20.086: 99.7301% ( 1) 00:08:27.135 20.086 - 20.185: 99.7363% ( 1) 00:08:27.135 20.185 - 20.283: 99.7485% ( 2) 00:08:27.135 20.283 - 20.382: 99.7547% ( 1) 00:08:27.135 20.382 - 20.480: 99.7731% ( 3) 00:08:27.135 20.578 - 20.677: 99.7792% ( 1) 00:08:27.135 20.775 - 20.874: 99.7853% ( 1) 00:08:27.135 20.874 - 20.972: 99.7915% ( 1) 00:08:27.135 20.972 - 21.071: 99.8037% ( 2) 00:08:27.135 21.071 - 21.169: 99.8099% ( 1) 00:08:27.135 21.169 - 21.268: 99.8160% ( 1) 00:08:27.135 21.268 - 21.366: 99.8221% ( 1) 00:08:27.135 21.465 - 21.563: 99.8283% ( 1) 00:08:27.135 21.662 - 21.760: 99.8405% ( 2) 00:08:27.135 21.760 - 21.858: 99.8467% ( 1) 00:08:27.135 22.055 - 22.154: 99.8528% ( 1) 00:08:27.135 22.154 - 22.252: 99.8589% ( 1) 00:08:27.135 22.351 - 22.449: 99.8651% ( 1) 00:08:27.135 22.449 - 22.548: 99.8773% ( 2) 00:08:27.135 22.548 - 22.646: 99.8957% ( 3) 00:08:27.135 22.745 - 22.843: 99.9080% ( 2) 00:08:27.135 23.040 - 23.138: 99.9141% ( 1) 00:08:27.135 23.138 - 23.237: 99.9203% ( 1) 00:08:27.135 23.237 - 23.335: 99.9264% ( 1) 00:08:27.135 23.631 - 23.729: 99.9325% ( 1) 00:08:27.135 24.222 - 24.320: 99.9387% ( 1) 00:08:27.135 24.418 - 24.517: 99.9448% ( 1) 00:08:27.135 29.538 - 29.735: 99.9509% ( 1) 00:08:27.135 31.114 - 31.311: 99.9571% ( 1) 00:08:27.135 34.855 - 35.052: 99.9632% ( 1) 00:08:27.135 36.431 - 36.628: 99.9693% ( 1) 00:08:27.135 37.022 - 37.218: 99.9755% ( 1) 00:08:27.135 39.188 - 39.385: 99.9816% ( 1) 00:08:27.135 43.717 - 43.914: 99.9877% ( 1) 00:08:27.135 48.443 - 48.640: 99.9939% ( 1) 00:08:27.135 51.200 - 51.594: 100.0000% ( 1) 00:08:27.135 00:08:27.135 Complete histogram 00:08:27.135 ================== 00:08:27.135 Range in us Cumulative Count 00:08:27.135 7.237 - 7.286: 0.1472% ( 24) 00:08:27.135 7.286 - 7.335: 0.6685% ( 85) 00:08:27.135 7.335 - 7.385: 1.1346% ( 76) 00:08:27.135 7.385 - 7.434: 1.3799% ( 40) 00:08:27.135 7.434 - 7.483: 1.5210% ( 23) 00:08:27.135 7.483 - 7.532: 1.6130% ( 15) 00:08:27.135 7.532 - 7.582: 1.7050% ( 15) 00:08:27.135 7.582 - 7.631: 1.7847% ( 13) 00:08:27.135 7.631 - 7.680: 1.9503% ( 27) 00:08:27.135 7.680 - 7.729: 5.4707% ( 574) 00:08:27.135 7.729 - 7.778: 20.3987% ( 2434) 00:08:27.135 7.778 - 7.828: 34.1797% ( 2247) 00:08:27.135 7.828 - 7.877: 44.8758% ( 1744) 00:08:27.135 7.877 - 7.926: 61.1898% ( 2660) 00:08:27.135 7.926 - 7.975: 73.2168% ( 1961) 00:08:27.135 7.975 - 8.025: 81.4351% ( 1340) 00:08:27.135 8.025 - 8.074: 87.6909% ( 1020) 00:08:27.135 8.074 - 8.123: 91.4934% ( 620) 00:08:27.135 8.123 - 8.172: 94.0325% ( 414) 00:08:27.135 8.172 - 8.222: 
95.5535% ( 248) 00:08:27.135 8.222 - 8.271: 96.5532% ( 163) 00:08:27.135 8.271 - 8.320: 97.1665% ( 100) 00:08:27.135 8.320 - 8.369: 97.5100% ( 56) 00:08:27.135 8.369 - 8.418: 97.7124% ( 33) 00:08:27.135 8.418 - 8.468: 97.8473% ( 22) 00:08:27.135 8.468 - 8.517: 97.9025% ( 9) 00:08:27.135 8.517 - 8.566: 97.9699% ( 11) 00:08:27.135 8.566 - 8.615: 97.9945% ( 4) 00:08:27.135 8.615 - 8.665: 98.0067% ( 2) 00:08:27.135 8.714 - 8.763: 98.0313% ( 4) 00:08:27.135 8.763 - 8.812: 98.0435% ( 2) 00:08:27.135 8.812 - 8.862: 98.0558% ( 2) 00:08:27.135 8.862 - 8.911: 98.0619% ( 1) 00:08:27.135 8.911 - 8.960: 98.0865% ( 4) 00:08:27.135 8.960 - 9.009: 98.0926% ( 1) 00:08:27.135 9.009 - 9.058: 98.0987% ( 1) 00:08:27.135 9.058 - 9.108: 98.1110% ( 2) 00:08:27.135 9.108 - 9.157: 98.1171% ( 1) 00:08:27.135 9.157 - 9.206: 98.1294% ( 2) 00:08:27.135 9.206 - 9.255: 98.1417% ( 2) 00:08:27.135 9.403 - 9.452: 98.1539% ( 2) 00:08:27.135 9.452 - 9.502: 98.1662% ( 2) 00:08:27.135 9.502 - 9.551: 98.1785% ( 2) 00:08:27.135 9.551 - 9.600: 98.1846% ( 1) 00:08:27.135 9.698 - 9.748: 98.2030% ( 3) 00:08:27.135 9.748 - 9.797: 98.2091% ( 1) 00:08:27.135 9.797 - 9.846: 98.2153% ( 1) 00:08:27.135 9.895 - 9.945: 98.2214% ( 1) 00:08:27.135 9.945 - 9.994: 98.2398% ( 3) 00:08:27.135 10.142 - 10.191: 98.2459% ( 1) 00:08:27.135 10.929 - 10.978: 98.2521% ( 1) 00:08:27.135 11.028 - 11.077: 98.2582% ( 1) 00:08:27.135 11.323 - 11.372: 98.2643% ( 1) 00:08:27.135 11.520 - 11.569: 98.2705% ( 1) 00:08:27.135 11.766 - 11.815: 98.2766% ( 1) 00:08:27.135 11.963 - 12.012: 98.2827% ( 1) 00:08:27.135 12.062 - 12.111: 98.2889% ( 1) 00:08:27.135 12.160 - 12.209: 98.3011% ( 2) 00:08:27.135 12.209 - 12.258: 98.3134% ( 2) 00:08:27.135 12.357 - 12.406: 98.3195% ( 1) 00:08:27.135 12.455 - 12.505: 98.3318% ( 2) 00:08:27.135 12.505 - 12.554: 98.3379% ( 1) 00:08:27.135 12.603 - 12.702: 98.3502% ( 2) 00:08:27.135 12.702 - 12.800: 98.3563% ( 1) 00:08:27.135 12.800 - 12.898: 98.3625% ( 1) 00:08:27.135 13.095 - 13.194: 98.3809% ( 3) 00:08:27.135 13.194 - 13.292: 98.4054% ( 4) 00:08:27.135 13.292 - 13.391: 98.4422% ( 6) 00:08:27.135 13.391 - 13.489: 98.4913% ( 8) 00:08:27.135 13.489 - 13.588: 98.5526% ( 10) 00:08:27.135 13.588 - 13.686: 98.6017% ( 8) 00:08:27.135 13.686 - 13.785: 98.6814% ( 13) 00:08:27.135 13.785 - 13.883: 98.7427% ( 10) 00:08:27.135 13.883 - 13.982: 98.7734% ( 5) 00:08:27.135 13.982 - 14.080: 98.8347% ( 10) 00:08:27.135 14.080 - 14.178: 98.8715% ( 6) 00:08:27.135 14.178 - 14.277: 98.9635% ( 15) 00:08:27.135 14.277 - 14.375: 99.0187% ( 9) 00:08:27.135 14.375 - 14.474: 99.1168% ( 16) 00:08:27.135 14.474 - 14.572: 99.2027% ( 14) 00:08:27.135 14.572 - 14.671: 99.2395% ( 6) 00:08:27.135 14.671 - 14.769: 99.3438% ( 17) 00:08:27.135 14.769 - 14.868: 99.3867% ( 7) 00:08:27.135 14.868 - 14.966: 99.4603% ( 12) 00:08:27.135 14.966 - 15.065: 99.5216% ( 10) 00:08:27.135 15.065 - 15.163: 99.5707% ( 8) 00:08:27.135 15.163 - 15.262: 99.6075% ( 6) 00:08:27.135 15.262 - 15.360: 99.6320% ( 4) 00:08:27.135 15.360 - 15.458: 99.6504% ( 3) 00:08:27.135 15.458 - 15.557: 99.6872% ( 6) 00:08:27.135 15.557 - 15.655: 99.6995% ( 2) 00:08:27.135 15.655 - 15.754: 99.7117% ( 2) 00:08:27.135 15.951 - 16.049: 99.7179% ( 1) 00:08:27.135 16.049 - 16.148: 99.7301% ( 2) 00:08:27.135 16.148 - 16.246: 99.7363% ( 1) 00:08:27.135 16.246 - 16.345: 99.7424% ( 1) 00:08:27.135 16.345 - 16.443: 99.7547% ( 2) 00:08:27.135 16.443 - 16.542: 99.7731% ( 3) 00:08:27.135 16.542 - 16.640: 99.7792% ( 1) 00:08:27.135 17.034 - 17.132: 99.7853% ( 1) 00:08:27.135 17.329 - 17.428: 99.7915% ( 1) 00:08:27.135 
17.428 - 17.526: 99.7976% ( 1) 00:08:27.135 17.526 - 17.625: 99.8037% ( 1) 00:08:27.135 18.018 - 18.117: 99.8160% ( 2) 00:08:27.135 18.117 - 18.215: 99.8283% ( 2) 00:08:27.135 18.215 - 18.314: 99.8344% ( 1) 00:08:27.135 18.314 - 18.412: 99.8467% ( 2) 00:08:27.135 18.511 - 18.609: 99.8589% ( 2) 00:08:27.135 18.806 - 18.905: 99.8651% ( 1) 00:08:27.135 18.905 - 19.003: 99.8712% ( 1) 00:08:27.135 19.003 - 19.102: 99.8773% ( 1) 00:08:27.135 19.200 - 19.298: 99.8896% ( 2) 00:08:27.135 19.594 - 19.692: 99.8957% ( 1) 00:08:27.135 19.692 - 19.791: 99.9019% ( 1) 00:08:27.135 19.791 - 19.889: 99.9080% ( 1) 00:08:27.135 19.889 - 19.988: 99.9141% ( 1) 00:08:27.135 20.086 - 20.185: 99.9203% ( 1) 00:08:27.135 20.283 - 20.382: 99.9264% ( 1) 00:08:27.135 20.677 - 20.775: 99.9325% ( 1) 00:08:27.135 21.465 - 21.563: 99.9387% ( 1) 00:08:27.135 22.252 - 22.351: 99.9448% ( 1) 00:08:27.136 22.646 - 22.745: 99.9509% ( 1) 00:08:27.136 23.532 - 23.631: 99.9571% ( 1) 00:08:27.136 32.689 - 32.886: 99.9632% ( 1) 00:08:27.136 33.280 - 33.477: 99.9693% ( 1) 00:08:27.136 39.975 - 40.172: 99.9755% ( 1) 00:08:27.136 53.563 - 53.957: 99.9816% ( 1) 00:08:27.136 55.532 - 55.926: 99.9877% ( 1) 00:08:27.136 60.652 - 61.046: 99.9939% ( 1) 00:08:27.136 160.689 - 161.477: 100.0000% ( 1) 00:08:27.136 00:08:27.136 ************************************ 00:08:27.136 END TEST nvme_overhead 00:08:27.136 ************************************ 00:08:27.136 00:08:27.136 real 0m1.200s 00:08:27.136 user 0m1.057s 00:08:27.136 sys 0m0.097s 00:08:27.136 20:50:45 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.136 20:50:45 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:27.136 20:50:45 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:27.136 20:50:45 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:27.136 20:50:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.136 20:50:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.136 ************************************ 00:08:27.136 START TEST nvme_arbitration 00:08:27.136 ************************************ 00:08:27.136 20:50:45 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:30.430 Initializing NVMe Controllers 00:08:30.430 Attached to 0000:00:10.0 00:08:30.430 Attached to 0000:00:11.0 00:08:30.430 Attached to 0000:00:13.0 00:08:30.430 Attached to 0000:00:12.0 00:08:30.430 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:30.430 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:30.430 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:30.430 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:30.430 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:30.430 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:30.430 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:30.430 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:30.430 Initialization complete. Launching workers. 
00:08:30.430 Starting thread on core 1 with urgent priority queue 00:08:30.430 Starting thread on core 2 with urgent priority queue 00:08:30.430 Starting thread on core 3 with urgent priority queue 00:08:30.430 Starting thread on core 0 with urgent priority queue 00:08:30.430 QEMU NVMe Ctrl (12340 ) core 0: 6172.67 IO/s 16.20 secs/100000 ios 00:08:30.430 QEMU NVMe Ctrl (12342 ) core 0: 6165.33 IO/s 16.22 secs/100000 ios 00:08:30.430 QEMU NVMe Ctrl (12341 ) core 1: 6442.33 IO/s 15.52 secs/100000 ios 00:08:30.430 QEMU NVMe Ctrl (12342 ) core 1: 6450.33 IO/s 15.50 secs/100000 ios 00:08:30.430 QEMU NVMe Ctrl (12343 ) core 2: 6159.33 IO/s 16.24 secs/100000 ios 00:08:30.430 QEMU NVMe Ctrl (12342 ) core 3: 6181.67 IO/s 16.18 secs/100000 ios 00:08:30.430 ======================================================== 00:08:30.430 00:08:30.430 00:08:30.430 real 0m3.223s 00:08:30.430 user 0m9.037s 00:08:30.430 sys 0m0.093s 00:08:30.430 20:50:48 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.430 20:50:48 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:30.430 ************************************ 00:08:30.430 END TEST nvme_arbitration 00:08:30.430 ************************************ 00:08:30.430 20:50:48 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:30.430 20:50:48 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:30.430 20:50:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.430 20:50:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.430 ************************************ 00:08:30.430 START TEST nvme_single_aen 00:08:30.430 ************************************ 00:08:30.430 20:50:48 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:30.691 Asynchronous Event Request test 00:08:30.691 Attached to 0000:00:10.0 00:08:30.691 Attached to 0000:00:11.0 00:08:30.691 Attached to 0000:00:13.0 00:08:30.691 Attached to 0000:00:12.0 00:08:30.691 Reset controller to setup AER completions for this process 00:08:30.691 Registering asynchronous event callbacks... 
00:08:30.691 Getting orig temperature thresholds of all controllers 00:08:30.691 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.691 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.691 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.691 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:30.691 Setting all controllers temperature threshold low to trigger AER 00:08:30.691 Waiting for all controllers temperature threshold to be set lower 00:08:30.691 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.691 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:30.691 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.691 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:30.691 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.691 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:30.691 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:30.691 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:30.691 Waiting for all controllers to trigger AER and reset threshold 00:08:30.691 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.691 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.691 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.691 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.691 Cleaning up... 00:08:30.691 00:08:30.691 real 0m0.214s 00:08:30.691 user 0m0.077s 00:08:30.691 sys 0m0.090s 00:08:30.691 20:50:48 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.691 ************************************ 00:08:30.691 END TEST nvme_single_aen 00:08:30.691 ************************************ 00:08:30.691 20:50:48 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:30.691 20:50:48 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:30.691 20:50:48 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.691 20:50:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.691 20:50:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.691 ************************************ 00:08:30.691 START TEST nvme_doorbell_aers 00:08:30.691 ************************************ 00:08:30.691 20:50:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:30.691 20:50:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:30.691 20:50:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:30.691 20:50:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:30.691 20:50:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:30.691 20:50:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:30.691 20:50:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:30.691 20:50:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:30.691 20:50:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:30.691 20:50:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
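The xtrace lines above show how `nvme_doorbell_aers` discovers its targets: `get_nvme_bdfs` runs `scripts/gen_nvme.sh` and extracts each controller's PCI address with `jq -r '.config[].params.traddr'`, then the test loops over the resulting BDFs. A standalone sketch of that discovery-plus-loop shape follows; the paths and the 10-second `timeout --preserve-status` come verbatim from the trace, while the error handling is a simplifying assumption.

```bash
#!/usr/bin/env bash
# Sketch of the BDF discovery and per-controller loop traced above.
rootdir=/home/vagrant/spdk_repo/spdk

# gen_nvme.sh emits a JSON config; traddr holds each controller's
# PCI address (e.g. 0000:00:10.0), exactly as in the trace.
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} == 0 )) && { echo "no NVMe controllers found" >&2; exit 1; }

for bdf in "${bdfs[@]}"; do
    # --preserve-status keeps the test binary's exit code even when
    # the 10 s timeout fires, matching the nvme.sh@73 invocation.
    timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" \
        -r "trtype:PCIe traddr:$bdf"
done
```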
00:08:30.691 20:50:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:30.691 20:50:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:30.691 20:50:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:30.691 20:50:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:30.951 [2024-11-20 20:50:48.854952] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:08:41.010 Executing: test_write_invalid_db 00:08:41.010 Waiting for AER completion... 00:08:41.010 Failure: test_write_invalid_db 00:08:41.010 00:08:41.010 Executing: test_invalid_db_write_overflow_sq 00:08:41.010 Waiting for AER completion... 00:08:41.010 Failure: test_invalid_db_write_overflow_sq 00:08:41.010 00:08:41.010 Executing: test_invalid_db_write_overflow_cq 00:08:41.011 Waiting for AER completion... 00:08:41.011 Failure: test_invalid_db_write_overflow_cq 00:08:41.011 00:08:41.011 20:50:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:41.011 20:50:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:41.011 [2024-11-20 20:50:58.895716] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:08:50.979 Executing: test_write_invalid_db 00:08:50.979 Waiting for AER completion... 00:08:50.979 Failure: test_write_invalid_db 00:08:50.979 00:08:50.979 Executing: test_invalid_db_write_overflow_sq 00:08:50.979 Waiting for AER completion... 00:08:50.979 Failure: test_invalid_db_write_overflow_sq 00:08:50.979 00:08:50.979 Executing: test_invalid_db_write_overflow_cq 00:08:50.979 Waiting for AER completion... 00:08:50.979 Failure: test_invalid_db_write_overflow_cq 00:08:50.979 00:08:50.979 20:51:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:50.979 20:51:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:50.979 [2024-11-20 20:51:08.924741] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:09:00.987 Executing: test_write_invalid_db 00:09:00.987 Waiting for AER completion... 00:09:00.987 Failure: test_write_invalid_db 00:09:00.987 00:09:00.987 Executing: test_invalid_db_write_overflow_sq 00:09:00.987 Waiting for AER completion... 00:09:00.987 Failure: test_invalid_db_write_overflow_sq 00:09:00.987 00:09:00.987 Executing: test_invalid_db_write_overflow_cq 00:09:00.987 Waiting for AER completion... 
00:09:00.987 Failure: test_invalid_db_write_overflow_cq 00:09:00.987 00:09:00.987 20:51:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:00.987 20:51:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:00.987 [2024-11-20 20:51:18.950444] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:09:10.958 Executing: test_write_invalid_db 00:09:10.958 Waiting for AER completion... 00:09:10.958 Failure: test_write_invalid_db 00:09:10.958 00:09:10.958 Executing: test_invalid_db_write_overflow_sq 00:09:10.958 Waiting for AER completion... 00:09:10.958 Failure: test_invalid_db_write_overflow_sq 00:09:10.958 00:09:10.958 Executing: test_invalid_db_write_overflow_cq 00:09:10.958 Waiting for AER completion... 00:09:10.958 Failure: test_invalid_db_write_overflow_cq 00:09:10.958 00:09:10.958 00:09:10.958 real 0m40.184s 00:09:10.958 user 0m34.360s 00:09:10.958 sys 0m5.465s 00:09:10.958 20:51:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.958 ************************************ 00:09:10.958 END TEST nvme_doorbell_aers 00:09:10.958 20:51:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:10.958 ************************************ 00:09:10.958 20:51:28 nvme -- nvme/nvme.sh@97 -- # uname 00:09:10.958 20:51:28 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:10.958 20:51:28 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:10.958 20:51:28 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:10.958 20:51:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.958 20:51:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.958 ************************************ 00:09:10.958 START TEST nvme_multi_aen 00:09:10.958 ************************************ 00:09:10.958 20:51:28 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:10.958 [2024-11-20 20:51:29.004429] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:09:10.958 [2024-11-20 20:51:29.004490] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:09:10.958 [2024-11-20 20:51:29.004501] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:09:10.958 [2024-11-20 20:51:29.005757] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:09:10.959 [2024-11-20 20:51:29.005810] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:09:10.959 [2024-11-20 20:51:29.005817] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:09:10.959 [2024-11-20 20:51:29.006808] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. 
Dropping the request. 00:09:10.959 [2024-11-20 20:51:29.006830] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:09:10.959 [2024-11-20 20:51:29.006837] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:09:10.959 [2024-11-20 20:51:29.008080] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:09:10.959 [2024-11-20 20:51:29.008164] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:09:10.959 [2024-11-20 20:51:29.008216] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 74773) is not found. Dropping the request. 00:09:10.959 Child process pid: 75295 00:09:11.225 [Child] Asynchronous Event Request test 00:09:11.225 [Child] Attached to 0000:00:10.0 00:09:11.225 [Child] Attached to 0000:00:11.0 00:09:11.225 [Child] Attached to 0000:00:13.0 00:09:11.225 [Child] Attached to 0000:00:12.0 00:09:11.225 [Child] Registering asynchronous event callbacks... 00:09:11.225 [Child] Getting orig temperature thresholds of all controllers 00:09:11.225 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.225 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.225 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.225 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.225 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:11.225 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.225 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.225 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.225 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.225 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.225 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.225 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.225 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.225 [Child] Cleaning up... 00:09:11.225 Asynchronous Event Request test 00:09:11.225 Attached to 0000:00:10.0 00:09:11.225 Attached to 0000:00:11.0 00:09:11.225 Attached to 0000:00:13.0 00:09:11.225 Attached to 0000:00:12.0 00:09:11.225 Reset controller to setup AER completions for this process 00:09:11.225 Registering asynchronous event callbacks... 
00:09:11.225 Getting orig temperature thresholds of all controllers 00:09:11.225 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.225 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.225 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.225 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:11.225 Setting all controllers temperature threshold low to trigger AER 00:09:11.225 Waiting for all controllers temperature threshold to be set lower 00:09:11.225 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.225 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:11.225 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.225 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:11.225 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.225 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:11.225 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:11.225 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:11.225 Waiting for all controllers to trigger AER and reset threshold 00:09:11.225 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.225 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.225 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.225 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.225 Cleaning up... 00:09:11.225 00:09:11.225 real 0m0.372s 00:09:11.225 user 0m0.125s 00:09:11.225 sys 0m0.145s 00:09:11.225 20:51:29 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.225 ************************************ 00:09:11.225 20:51:29 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:11.225 END TEST nvme_multi_aen 00:09:11.225 ************************************ 00:09:11.225 20:51:29 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:11.225 20:51:29 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:11.225 20:51:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.225 20:51:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:11.225 ************************************ 00:09:11.225 START TEST nvme_startup 00:09:11.225 ************************************ 00:09:11.225 20:51:29 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:11.483 Initializing NVMe Controllers 00:09:11.483 Attached to 0000:00:10.0 00:09:11.483 Attached to 0000:00:11.0 00:09:11.483 Attached to 0000:00:13.0 00:09:11.483 Attached to 0000:00:12.0 00:09:11.483 Initialization complete. 00:09:11.483 Time used:120762.773 (us). 
00:09:11.483 00:09:11.483 real 0m0.172s 00:09:11.483 user 0m0.055s 00:09:11.483 sys 0m0.078s 00:09:11.483 20:51:29 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.483 ************************************ 00:09:11.484 20:51:29 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:11.484 END TEST nvme_startup 00:09:11.484 ************************************ 00:09:11.484 20:51:29 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:11.484 20:51:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.484 20:51:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.484 20:51:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:11.484 ************************************ 00:09:11.484 START TEST nvme_multi_secondary 00:09:11.484 ************************************ 00:09:11.484 20:51:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:11.484 20:51:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75344 00:09:11.484 20:51:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:11.484 20:51:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75345 00:09:11.484 20:51:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:11.484 20:51:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:14.785 Initializing NVMe Controllers 00:09:14.785 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:14.785 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:14.785 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:14.785 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:14.785 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:14.785 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:14.785 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:14.785 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:14.785 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:14.785 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:14.785 Initialization complete. Launching workers. 
00:09:14.785 ======================================================== 00:09:14.785 Latency(us) 00:09:14.785 Device Information : IOPS MiB/s Average min max 00:09:14.785 PCIE (0000:00:10.0) NSID 1 from core 1: 6307.86 24.64 2534.98 960.46 8120.06 00:09:14.785 PCIE (0000:00:11.0) NSID 1 from core 1: 6307.86 24.64 2535.97 988.31 7685.59 00:09:14.785 PCIE (0000:00:13.0) NSID 1 from core 1: 6307.86 24.64 2535.90 898.98 7917.30 00:09:14.785 PCIE (0000:00:12.0) NSID 1 from core 1: 6307.86 24.64 2536.01 974.65 7912.30 00:09:14.785 PCIE (0000:00:12.0) NSID 2 from core 1: 6307.86 24.64 2535.97 832.92 7965.83 00:09:14.785 PCIE (0000:00:12.0) NSID 3 from core 1: 6307.86 24.64 2535.90 971.21 8102.46 00:09:14.785 ======================================================== 00:09:14.785 Total : 37847.18 147.84 2535.79 832.92 8120.06 00:09:14.785 00:09:14.785 Initializing NVMe Controllers 00:09:14.785 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:14.785 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:14.785 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:14.785 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:14.785 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:14.785 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:14.785 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:14.785 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:14.785 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:14.785 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:14.785 Initialization complete. Launching workers. 00:09:14.785 ======================================================== 00:09:14.785 Latency(us) 00:09:14.785 Device Information : IOPS MiB/s Average min max 00:09:14.785 PCIE (0000:00:10.0) NSID 1 from core 2: 1869.98 7.30 8553.61 1476.01 19228.81 00:09:14.785 PCIE (0000:00:11.0) NSID 1 from core 2: 1869.98 7.30 8555.27 1084.98 19649.28 00:09:14.785 PCIE (0000:00:13.0) NSID 1 from core 2: 1869.98 7.30 8556.39 1530.30 19820.85 00:09:14.785 PCIE (0000:00:12.0) NSID 1 from core 2: 1869.98 7.30 8556.18 1502.48 20915.43 00:09:14.785 PCIE (0000:00:12.0) NSID 2 from core 2: 1869.98 7.30 8555.68 1301.99 23218.28 00:09:14.785 PCIE (0000:00:12.0) NSID 3 from core 2: 1869.98 7.30 8557.08 1426.94 19255.54 00:09:14.785 ======================================================== 00:09:14.785 Total : 11219.89 43.83 8555.70 1084.98 23218.28 00:09:14.785 00:09:14.785 20:51:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75344 00:09:16.699 Initializing NVMe Controllers 00:09:16.699 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:16.699 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:16.699 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:16.699 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:16.699 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:16.699 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:16.699 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:16.699 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:16.699 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:16.699 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:16.699 Initialization complete. Launching workers. 
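The `nvme_multi_secondary` runs above launch several `spdk_nvme_perf` instances against the same controllers, distinguished only by core mask (`-c 0x1`, `-c 0x2`, `-c 0x4`) while sharing one shared-memory instance id (`-i 0`) so they attach as primary and secondary processes. A reduced sketch of that launch-and-wait choreography, with queue depth, I/O size, and runtimes copied from the trace; the `$!`/`wait` bookkeeping stands in for the harness's `pid0`/`pid1` variables.

```bash
#!/usr/bin/env bash
# Sketch of the primary/secondary perf pattern from nvme.sh@51-57.
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

# Two background runs on cores 0x1 and 0x4 share shm instance 0 with
# the foreground run on 0x2; only core mask and runtime differ.
"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid1=$!
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2

wait "$pid0"   # mirrors 'wait 75344' / 'wait 75345' in the log
wait "$pid1"
```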
00:09:16.699 ======================================================== 00:09:16.699 Latency(us) 00:09:16.699 Device Information : IOPS MiB/s Average min max 00:09:16.699 PCIE (0000:00:10.0) NSID 1 from core 0: 8708.98 34.02 1835.94 738.33 8938.70 00:09:16.699 PCIE (0000:00:11.0) NSID 1 from core 0: 8708.58 34.02 1836.85 757.06 8988.60 00:09:16.699 PCIE (0000:00:13.0) NSID 1 from core 0: 8708.38 34.02 1836.87 626.13 9142.62 00:09:16.699 PCIE (0000:00:12.0) NSID 1 from core 0: 8709.78 34.02 1836.56 560.89 9157.12 00:09:16.699 PCIE (0000:00:12.0) NSID 2 from core 0: 8709.78 34.02 1836.53 469.15 8670.05 00:09:16.699 PCIE (0000:00:12.0) NSID 3 from core 0: 8709.78 34.02 1836.51 406.53 8726.31 00:09:16.699 ======================================================== 00:09:16.699 Total : 52255.26 204.12 1836.55 406.53 9157.12 00:09:16.699 00:09:16.699 20:51:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75345 00:09:16.699 20:51:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75414 00:09:16.699 20:51:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:16.699 20:51:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75415 00:09:16.699 20:51:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:16.699 20:51:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:20.006 Initializing NVMe Controllers 00:09:20.006 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:20.006 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:20.006 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:20.006 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:20.006 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:20.006 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:20.006 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:20.006 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:20.006 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:20.006 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:20.006 Initialization complete. Launching workers. 
00:09:20.006 ======================================================== 00:09:20.006 Latency(us) 00:09:20.007 Device Information : IOPS MiB/s Average min max 00:09:20.007 PCIE (0000:00:10.0) NSID 1 from core 0: 4138.65 16.17 3864.38 789.15 9568.78 00:09:20.007 PCIE (0000:00:11.0) NSID 1 from core 0: 4138.65 16.17 3865.74 788.84 9396.97 00:09:20.007 PCIE (0000:00:13.0) NSID 1 from core 0: 4138.65 16.17 3865.75 783.58 9057.92 00:09:20.007 PCIE (0000:00:12.0) NSID 1 from core 0: 4138.65 16.17 3865.81 777.32 9297.02 00:09:20.007 PCIE (0000:00:12.0) NSID 2 from core 0: 4138.65 16.17 3866.81 791.28 8796.79 00:09:20.007 PCIE (0000:00:12.0) NSID 3 from core 0: 4138.65 16.17 3868.06 808.51 9270.76 00:09:20.007 ======================================================== 00:09:20.007 Total : 24831.91 97.00 3866.09 777.32 9568.78 00:09:20.007 00:09:20.007 Initializing NVMe Controllers 00:09:20.007 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:20.007 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:20.007 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:20.007 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:20.007 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:20.007 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:20.007 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:20.007 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:20.007 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:20.007 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:20.007 Initialization complete. Launching workers. 00:09:20.007 ======================================================== 00:09:20.007 Latency(us) 00:09:20.007 Device Information : IOPS MiB/s Average min max 00:09:20.007 PCIE (0000:00:10.0) NSID 1 from core 1: 5805.13 22.68 2754.53 914.74 9413.08 00:09:20.007 PCIE (0000:00:11.0) NSID 1 from core 1: 5805.13 22.68 2755.62 914.73 9790.26 00:09:20.007 PCIE (0000:00:13.0) NSID 1 from core 1: 5805.13 22.68 2755.66 890.29 9315.38 00:09:20.007 PCIE (0000:00:12.0) NSID 1 from core 1: 5805.13 22.68 2755.75 910.01 9550.10 00:09:20.007 PCIE (0000:00:12.0) NSID 2 from core 1: 5805.13 22.68 2755.71 891.53 8340.72 00:09:20.007 PCIE (0000:00:12.0) NSID 3 from core 1: 5805.13 22.68 2755.81 946.07 8331.57 00:09:20.007 ======================================================== 00:09:20.007 Total : 34830.79 136.06 2755.51 890.29 9790.26 00:09:20.007 00:09:21.975 Initializing NVMe Controllers 00:09:21.975 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:21.975 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:21.975 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:21.975 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:21.975 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:21.975 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:21.975 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:21.975 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:21.975 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:21.975 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:21.975 Initialization complete. Launching workers. 
00:09:21.975 ======================================================== 00:09:21.975 Latency(us) 00:09:21.975 Device Information : IOPS MiB/s Average min max 00:09:21.975 PCIE (0000:00:10.0) NSID 1 from core 2: 2757.46 10.77 5800.61 1019.25 24288.36 00:09:21.975 PCIE (0000:00:11.0) NSID 1 from core 2: 2757.46 10.77 5802.18 924.75 20042.50 00:09:21.975 PCIE (0000:00:13.0) NSID 1 from core 2: 2757.46 10.77 5801.48 984.64 20845.87 00:09:21.975 PCIE (0000:00:12.0) NSID 1 from core 2: 2757.46 10.77 5801.96 991.67 21135.73 00:09:21.975 PCIE (0000:00:12.0) NSID 2 from core 2: 2757.46 10.77 5801.54 968.91 20132.03 00:09:21.975 PCIE (0000:00:12.0) NSID 3 from core 2: 2757.46 10.77 5801.71 998.38 23284.31 00:09:21.975 ======================================================== 00:09:21.975 Total : 16544.73 64.63 5801.58 924.75 24288.36 00:09:21.975 00:09:21.975 ************************************ 00:09:21.975 END TEST nvme_multi_secondary 00:09:21.975 ************************************ 00:09:21.975 20:51:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75414 00:09:21.975 20:51:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75415 00:09:21.975 00:09:21.975 real 0m10.540s 00:09:21.975 user 0m18.242s 00:09:21.975 sys 0m0.555s 00:09:21.975 20:51:40 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.975 20:51:40 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:21.975 20:51:40 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:21.975 20:51:40 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:21.975 20:51:40 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74387 ]] 00:09:21.975 20:51:40 nvme -- common/autotest_common.sh@1094 -- # kill 74387 00:09:21.975 20:51:40 nvme -- common/autotest_common.sh@1095 -- # wait 74387 00:09:21.975 [2024-11-20 20:51:40.048860] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:21.975 [2024-11-20 20:51:40.049544] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:21.975 [2024-11-20 20:51:40.049582] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:21.975 [2024-11-20 20:51:40.049602] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:21.975 [2024-11-20 20:51:40.050179] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:21.975 [2024-11-20 20:51:40.050212] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:21.975 [2024-11-20 20:51:40.050225] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:21.975 [2024-11-20 20:51:40.050240] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:21.975 [2024-11-20 20:51:40.050729] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 
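After the secondary-process tests, the harness tears down the long-lived stub process: `kill_stub` checks that `/proc/<pid>` still exists, kills and reaps it, and removes `/var/run/spdk_stub0`; any admin requests still owned by that process are then dropped with the `nvme_pcie_common.c` errors shown above. A hedged reconstruction of that teardown, using the pid from the log purely as an example and assuming, as in the harness, that the stub is a child of the current shell.

```bash
#!/usr/bin/env bash
# Sketch of the kill_stub sequence from autotest_common.sh@1093-1101.
stub_pid=74387   # example pid taken from the trace

if [[ -e /proc/$stub_pid ]]; then
    kill "$stub_pid"
    # Reap the stub; in-flight admin commands owned by it are dropped
    # by the driver ("The owning process ... is not found").
    wait "$stub_pid" || true
fi
rm -f /var/run/spdk_stub0
```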
00:09:21.975 [2024-11-20 20:51:40.050776] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:21.975 [2024-11-20 20:51:40.050790] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:21.975 [2024-11-20 20:51:40.050807] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:21.975 [2024-11-20 20:51:40.051340] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:21.975 [2024-11-20 20:51:40.051381] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:21.975 [2024-11-20 20:51:40.051394] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:21.975 [2024-11-20 20:51:40.051409] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75293) is not found. Dropping the request. 00:09:22.249 20:51:40 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:22.249 20:51:40 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:22.249 20:51:40 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:22.249 20:51:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:22.249 20:51:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.249 20:51:40 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:22.249 ************************************ 00:09:22.249 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:22.249 ************************************ 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:22.249 * Looking for test storage... 
00:09:22.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:22.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.249 --rc genhtml_branch_coverage=1 00:09:22.249 --rc genhtml_function_coverage=1 00:09:22.249 --rc genhtml_legend=1 00:09:22.249 --rc geninfo_all_blocks=1 00:09:22.249 --rc geninfo_unexecuted_blocks=1 00:09:22.249 00:09:22.249 ' 00:09:22.249 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:22.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.249 --rc genhtml_branch_coverage=1 00:09:22.250 --rc genhtml_function_coverage=1 00:09:22.250 --rc genhtml_legend=1 00:09:22.250 --rc geninfo_all_blocks=1 00:09:22.250 --rc geninfo_unexecuted_blocks=1 00:09:22.250 00:09:22.250 ' 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:22.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.250 --rc genhtml_branch_coverage=1 00:09:22.250 --rc genhtml_function_coverage=1 00:09:22.250 --rc genhtml_legend=1 00:09:22.250 --rc geninfo_all_blocks=1 00:09:22.250 --rc geninfo_unexecuted_blocks=1 00:09:22.250 00:09:22.250 ' 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:22.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.250 --rc genhtml_branch_coverage=1 00:09:22.250 --rc genhtml_function_coverage=1 00:09:22.250 --rc genhtml_legend=1 00:09:22.250 --rc geninfo_all_blocks=1 00:09:22.250 --rc geninfo_unexecuted_blocks=1 00:09:22.250 00:09:22.250 ' 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:22.250 
20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:22.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75579 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75579 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 75579 ']' 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
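The sequence that follows is the core of the stuck-admin-command test: attach the controller at the BDF discovered above, arm a one-shot error injection for a Get Features admin command, fire that command in the background so it gets stuck, reset the controller while it is pending, then decode the manually completed status. A hedged reconstruction using only the rpc.py calls visible in the trace (the trap/retry plumbing and the exact base64 command payload are omitted):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

# Hold the next admin Get Features (opcode 10 = 0x0a) for up to 15 s and fail
# it with sct=0 sc=1; --do_not_submit keeps it queued instead of completing.
$rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

# Send a Get Features (Number of Queues, cdw10=7) that will now get stuck...
$rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_payload" &

# ...and reset the controller while it is pending; the reset completes the
# stuck command manually, which is what the NOTICE lines below report.
$rpc bdev_nvme_reset_controller nvme0
wait
$rpc bdev_nvme_detach_controller nvme0

$cmd_payload stands in for the 64-byte base64-encoded admin command shown in the trace below.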
00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.250 20:51:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:22.511 [2024-11-20 20:51:40.427970] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:09:22.511 [2024-11-20 20:51:40.428349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75579 ] 00:09:22.511 [2024-11-20 20:51:40.591159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:22.772 [2024-11-20 20:51:40.636849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.772 [2024-11-20 20:51:40.637076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:22.772 [2024-11-20 20:51:40.637408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:22.772 [2024-11-20 20:51:40.637445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:23.346 nvme0n1 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_UmudJ.txt 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:23.346 true 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732135901 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75602 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:23.346 20:51:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:25.269 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:25.269 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.269 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:25.269 [2024-11-20 20:51:43.372673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:25.269 [2024-11-20 20:51:43.372973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:25.269 [2024-11-20 20:51:43.373008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:25.269 [2024-11-20 20:51:43.373025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:25.269 [2024-11-20 20:51:43.375152] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:25.269 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75602 00:09:25.269 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.269 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75602 00:09:25.269 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75602 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_UmudJ.txt 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:25.527 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_UmudJ.txt 00:09:25.528 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75579 00:09:25.528 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 75579 ']' 00:09:25.528 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 75579 00:09:25.528 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:25.528 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.528 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75579 00:09:25.528 killing process with pid 75579 00:09:25.528 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.528 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.528 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75579' 00:09:25.528 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 75579 00:09:25.528 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 75579 00:09:25.787 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:25.787 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:25.787 00:09:25.787 real 0m3.685s 00:09:25.787 user 0m12.845s 00:09:25.787 sys 0m0.679s 00:09:25.787 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:25.787 ************************************ 00:09:25.787 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:25.787 ************************************ 00:09:25.787 20:51:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:25.787 20:51:43 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:25.787 20:51:43 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:25.787 20:51:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:25.787 20:51:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.787 20:51:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:25.787 ************************************ 00:09:25.787 START TEST nvme_fio 00:09:25.787 ************************************ 00:09:25.787 20:51:43 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:25.787 20:51:43 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:25.787 20:51:43 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:25.787 20:51:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:25.787 20:51:43 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:25.787 20:51:43 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:25.787 20:51:43 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:25.787 20:51:43 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:25.787 20:51:43 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:26.049 20:51:43 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:26.049 20:51:43 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:26.049 20:51:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:26.049 20:51:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:26.049 20:51:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:26.049 20:51:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:26.049 20:51:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:26.049 20:51:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:26.049 20:51:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:26.309 20:51:44 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:26.309 20:51:44 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:26.309 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:26.309 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:26.309 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:26.309 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:26.310 20:51:44 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:26.310 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:26.310 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:26.310 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:26.310 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:26.310 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:26.310 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:26.310 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:26.310 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:26.310 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:26.310 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:26.310 20:51:44 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:26.570 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:26.570 fio-3.35 00:09:26.570 Starting 1 thread 00:09:31.859 00:09:31.859 test: (groupid=0, jobs=1): err= 0: pid=75729: Wed Nov 20 20:51:49 2024 00:09:31.859 read: IOPS=19.8k, BW=77.4MiB/s (81.1MB/s)(155MiB/2001msec) 00:09:31.859 slat (usec): min=4, max=248, avg= 5.34, stdev= 2.53 00:09:31.859 clat (usec): min=294, max=10536, avg=3216.10, stdev=887.56 00:09:31.859 lat (usec): min=298, max=10551, avg=3221.43, stdev=888.64 00:09:31.859 clat percentiles (usec): 00:09:31.859 | 1.00th=[ 2212], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:09:31.859 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2933], 60.00th=[ 3130], 00:09:31.859 | 70.00th=[ 3326], 80.00th=[ 3589], 90.00th=[ 4293], 95.00th=[ 5145], 00:09:31.859 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 9372], 99.95th=[10159], 00:09:31.859 | 99.99th=[10552] 00:09:31.859 bw ( KiB/s): min=69680, max=83712, per=97.17%, avg=76994.67, stdev=7035.05, samples=3 00:09:31.859 iops : min=17420, max=20928, avg=19248.67, stdev=1758.76, samples=3 00:09:31.859 write: IOPS=19.8k, BW=77.2MiB/s (80.9MB/s)(154MiB/2001msec); 0 zone resets 00:09:31.859 slat (nsec): min=4350, max=72702, avg=5496.41, stdev=2200.31 00:09:31.859 clat (usec): min=318, max=10785, avg=3232.82, stdev=901.39 00:09:31.859 lat (usec): min=323, max=10825, avg=3238.31, stdev=902.42 00:09:31.859 clat percentiles (usec): 00:09:31.859 | 1.00th=[ 2212], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2638], 00:09:31.859 | 30.00th=[ 2737], 40.00th=[ 2835], 50.00th=[ 2966], 60.00th=[ 3130], 00:09:31.859 | 70.00th=[ 3359], 80.00th=[ 3589], 90.00th=[ 4293], 95.00th=[ 5145], 00:09:31.859 | 99.00th=[ 6521], 99.50th=[ 7046], 99.90th=[ 9896], 99.95th=[10290], 00:09:31.859 | 99.99th=[10552] 00:09:31.859 bw ( KiB/s): min=69632, max=83808, per=97.49%, avg=77056.00, stdev=7111.85, samples=3 00:09:31.859 iops : min=17408, max=20952, avg=19264.00, stdev=1777.96, samples=3 00:09:31.859 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:09:31.859 lat (msec) : 2=0.25%, 4=87.01%, 10=12.63%, 20=0.08% 00:09:31.859 cpu : usr=99.05%, sys=0.05%, ctx=6, majf=0, 
minf=625 00:09:31.859 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:31.859 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.859 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:31.859 issued rwts: total=39637,39539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.859 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:31.859 00:09:31.859 Run status group 0 (all jobs): 00:09:31.859 READ: bw=77.4MiB/s (81.1MB/s), 77.4MiB/s-77.4MiB/s (81.1MB/s-81.1MB/s), io=155MiB (162MB), run=2001-2001msec 00:09:31.859 WRITE: bw=77.2MiB/s (80.9MB/s), 77.2MiB/s-77.2MiB/s (80.9MB/s-80.9MB/s), io=154MiB (162MB), run=2001-2001msec 00:09:32.120 ----------------------------------------------------- 00:09:32.120 Suppressions used: 00:09:32.120 count bytes template 00:09:32.120 1 32 /usr/src/fio/parse.c 00:09:32.120 1 8 libtcmalloc_minimal.so 00:09:32.120 ----------------------------------------------------- 00:09:32.120 00:09:32.121 20:51:50 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:32.121 20:51:50 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:32.121 20:51:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:32.121 20:51:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:32.382 20:51:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:32.382 20:51:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:32.382 20:51:50 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:32.382 20:51:50 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:32.382 20:51:50 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:32.382 20:51:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:32.644 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:32.644 fio-3.35 00:09:32.644 Starting 1 thread 00:09:39.278 00:09:39.278 test: (groupid=0, jobs=1): err= 0: pid=75786: Wed Nov 20 20:51:56 2024 00:09:39.278 read: IOPS=20.5k, BW=80.1MiB/s (84.0MB/s)(160MiB/2001msec) 00:09:39.278 slat (nsec): min=3862, max=80960, avg=6060.58, stdev=2702.92 00:09:39.278 clat (usec): min=290, max=11205, avg=3103.78, stdev=1005.48 00:09:39.278 lat (usec): min=298, max=11252, avg=3109.84, stdev=1007.19 00:09:39.278 clat percentiles (usec): 00:09:39.278 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:09:39.278 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2769], 00:09:39.278 | 70.00th=[ 2933], 80.00th=[ 3556], 90.00th=[ 4146], 95.00th=[ 5538], 00:09:39.278 | 99.00th=[ 7111], 99.50th=[ 7701], 99.90th=[ 8979], 99.95th=[ 9372], 00:09:39.278 | 99.99th=[10945] 00:09:39.278 bw ( KiB/s): min=73048, max=87336, per=98.52%, avg=80853.33, stdev=7235.25, samples=3 00:09:39.278 iops : min=18262, max=21834, avg=20213.33, stdev=1808.81, samples=3 00:09:39.278 write: IOPS=20.5k, BW=79.9MiB/s (83.8MB/s)(160MiB/2001msec); 0 zone resets 00:09:39.278 slat (nsec): min=4212, max=85820, avg=6487.70, stdev=2688.25 00:09:39.278 clat (usec): min=321, max=11078, avg=3120.04, stdev=1013.35 00:09:39.278 lat (usec): min=328, max=11098, avg=3126.53, stdev=1015.03 00:09:39.278 clat percentiles (usec): 00:09:39.278 | 1.00th=[ 2311], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2573], 00:09:39.278 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2769], 00:09:39.278 | 70.00th=[ 2966], 80.00th=[ 3589], 90.00th=[ 4178], 95.00th=[ 5604], 00:09:39.278 | 99.00th=[ 7111], 99.50th=[ 7767], 99.90th=[ 8979], 99.95th=[ 9503], 00:09:39.278 | 99.99th=[10683] 00:09:39.278 bw ( KiB/s): min=73152, max=87632, per=98.95%, avg=80992.00, stdev=7314.21, samples=3 00:09:39.278 iops : min=18288, max=21908, avg=20248.00, stdev=1828.55, samples=3 00:09:39.278 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:39.278 lat (msec) : 2=0.19%, 4=88.43%, 10=11.32%, 20=0.03% 00:09:39.278 cpu : usr=99.15%, sys=0.05%, ctx=4, majf=0, minf=625 00:09:39.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:39.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.278 issued rwts: total=41054,40946,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.278 00:09:39.278 Run status group 0 (all jobs): 00:09:39.278 READ: bw=80.1MiB/s (84.0MB/s), 80.1MiB/s-80.1MiB/s (84.0MB/s-84.0MB/s), io=160MiB (168MB), run=2001-2001msec 00:09:39.278 WRITE: bw=79.9MiB/s (83.8MB/s), 79.9MiB/s-79.9MiB/s (83.8MB/s-83.8MB/s), io=160MiB (168MB), run=2001-2001msec 00:09:39.278 ----------------------------------------------------- 00:09:39.278 Suppressions used: 00:09:39.278 count bytes template 00:09:39.278 1 32 /usr/src/fio/parse.c 00:09:39.278 1 8 libtcmalloc_minimal.so 00:09:39.278 ----------------------------------------------------- 00:09:39.278 
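Between runs, the same identify-then-fio cycle repeats once per controller. A condensed sketch of that loop, with the paths and ASan preload taken verbatim from the trace (the real helper also greps the identify output for 'Extended Data LBA' before settling on a 4096-byte block size):

rootdir=/home/vagrant/spdk_repo/spdk
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    # Skip controllers that expose no namespaces.
    "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" \
        | grep -qE '^Namespace ID:[0-9]+' || continue
    # fio treats ':' in filenames as a separator, so the BDF is passed with
    # dots; LD_PRELOAD loads ASan first, then the SPDK fio ioengine plugin.
    LD_PRELOAD="/usr/lib64/libasan.so.8 $rootdir/build/fio/spdk_nvme" \
        /usr/src/fio/fio "$rootdir/app/fio/nvme/example_config.fio" \
        "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs=4096
done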
00:09:39.278 20:51:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:39.278 20:51:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:39.278 20:51:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:39.278 20:51:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:39.278 20:51:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:39.278 20:51:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:39.278 20:51:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:39.278 20:51:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:39.278 20:51:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:39.278 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:39.278 fio-3.35 00:09:39.278 Starting 1 thread 00:09:45.866 00:09:45.866 test: (groupid=0, jobs=1): err= 0: pid=75847: Wed Nov 20 20:52:02 2024 00:09:45.866 read: IOPS=20.0k, BW=78.2MiB/s (81.9MB/s)(156MiB/2001msec) 00:09:45.866 slat (nsec): min=3923, max=77805, avg=6021.60, stdev=2497.40 00:09:45.866 clat (usec): min=239, max=10413, avg=3178.21, stdev=931.67 00:09:45.866 lat (usec): min=245, max=10478, avg=3184.23, stdev=932.94 00:09:45.866 clat percentiles (usec): 00:09:45.866 | 1.00th=[ 2311], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2638], 00:09:45.866 | 
30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2966], 00:09:45.866 | 70.00th=[ 3130], 80.00th=[ 3458], 90.00th=[ 4293], 95.00th=[ 5407], 00:09:45.866 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7898], 99.95th=[ 8979], 00:09:45.866 | 99.99th=[10159] 00:09:45.866 bw ( KiB/s): min=76697, max=82392, per=99.51%, avg=79637.67, stdev=2852.07, samples=3 00:09:45.866 iops : min=19174, max=20598, avg=19909.33, stdev=713.15, samples=3 00:09:45.866 write: IOPS=20.0k, BW=78.0MiB/s (81.8MB/s)(156MiB/2001msec); 0 zone resets 00:09:45.866 slat (nsec): min=4133, max=59329, avg=6380.18, stdev=2577.38 00:09:45.866 clat (usec): min=228, max=10270, avg=3202.41, stdev=945.23 00:09:45.866 lat (usec): min=235, max=10291, avg=3208.79, stdev=946.55 00:09:45.866 clat percentiles (usec): 00:09:45.866 | 1.00th=[ 2343], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2638], 00:09:45.866 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2966], 00:09:45.866 | 70.00th=[ 3130], 80.00th=[ 3490], 90.00th=[ 4359], 95.00th=[ 5473], 00:09:45.866 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7701], 99.95th=[ 9110], 00:09:45.866 | 99.99th=[10159] 00:09:45.866 bw ( KiB/s): min=76505, max=82288, per=99.73%, avg=79653.67, stdev=2925.61, samples=3 00:09:45.866 iops : min=19126, max=20572, avg=19913.33, stdev=731.54, samples=3 00:09:45.866 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:09:45.866 lat (msec) : 2=0.23%, 4=86.81%, 10=12.92%, 20=0.01% 00:09:45.866 cpu : usr=99.10%, sys=0.05%, ctx=2, majf=0, minf=625 00:09:45.866 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:45.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:45.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:45.866 issued rwts: total=40034,39955,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:45.866 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:45.866 00:09:45.866 Run status group 0 (all jobs): 00:09:45.866 READ: bw=78.2MiB/s (81.9MB/s), 78.2MiB/s-78.2MiB/s (81.9MB/s-81.9MB/s), io=156MiB (164MB), run=2001-2001msec 00:09:45.866 WRITE: bw=78.0MiB/s (81.8MB/s), 78.0MiB/s-78.0MiB/s (81.8MB/s-81.8MB/s), io=156MiB (164MB), run=2001-2001msec 00:09:45.866 ----------------------------------------------------- 00:09:45.866 Suppressions used: 00:09:45.866 count bytes template 00:09:45.866 1 32 /usr/src/fio/parse.c 00:09:45.866 1 8 libtcmalloc_minimal.so 00:09:45.866 ----------------------------------------------------- 00:09:45.866 00:09:45.866 20:52:03 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:45.866 20:52:03 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:45.866 20:52:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:45.866 20:52:03 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:45.867 20:52:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:45.867 20:52:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:45.867 20:52:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:45.867 20:52:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:45.867 20:52:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:45.867 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:45.867 fio-3.35 00:09:45.867 Starting 1 thread 00:09:51.155 00:09:51.155 test: (groupid=0, jobs=1): err= 0: pid=75902: Wed Nov 20 20:52:08 2024 00:09:51.155 read: IOPS=19.1k, BW=74.6MiB/s (78.2MB/s)(149MiB/2001msec) 00:09:51.155 slat (nsec): min=4236, max=67521, avg=5629.69, stdev=2918.48 00:09:51.155 clat (usec): min=221, max=17458, avg=3333.57, stdev=1211.80 00:09:51.155 lat (usec): min=225, max=17525, avg=3339.20, stdev=1213.28 00:09:51.155 clat percentiles (usec): 00:09:51.155 | 1.00th=[ 2114], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2507], 00:09:51.155 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 3032], 00:09:51.155 | 70.00th=[ 3392], 80.00th=[ 4146], 90.00th=[ 5145], 95.00th=[ 5866], 00:09:51.155 | 99.00th=[ 6783], 99.50th=[ 7177], 99.90th=[13960], 99.95th=[15926], 00:09:51.155 | 99.99th=[17433] 00:09:51.155 bw ( KiB/s): min=69344, max=78480, per=97.86%, avg=74765.33, stdev=4801.16, samples=3 00:09:51.155 iops : min=17336, max=19620, avg=18691.33, stdev=1200.29, samples=3 00:09:51.155 write: IOPS=19.1k, BW=74.5MiB/s (78.2MB/s)(149MiB/2001msec); 0 zone resets 00:09:51.155 slat (nsec): min=4314, max=90410, avg=5774.71, stdev=2975.28 00:09:51.155 clat (usec): min=236, max=17377, avg=3347.49, stdev=1214.39 00:09:51.155 lat (usec): min=241, max=17397, avg=3353.27, stdev=1215.83 00:09:51.155 clat percentiles (usec): 00:09:51.155 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:51.155 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 3032], 00:09:51.155 | 70.00th=[ 3425], 
80.00th=[ 4146], 90.00th=[ 5145], 95.00th=[ 5866], 00:09:51.155 | 99.00th=[ 6783], 99.50th=[ 7177], 99.90th=[14615], 99.95th=[16057], 00:09:51.155 | 99.99th=[17171] 00:09:51.155 bw ( KiB/s): min=69536, max=78336, per=97.99%, avg=74800.00, stdev=4647.52, samples=3 00:09:51.155 iops : min=17384, max=19584, avg=18700.00, stdev=1161.88, samples=3 00:09:51.155 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:09:51.155 lat (msec) : 2=0.49%, 4=77.73%, 10=21.55%, 20=0.17% 00:09:51.155 cpu : usr=99.00%, sys=0.00%, ctx=4, majf=0, minf=624 00:09:51.155 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:51.155 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.155 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:51.155 issued rwts: total=38218,38187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.155 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:51.155 00:09:51.155 Run status group 0 (all jobs): 00:09:51.155 READ: bw=74.6MiB/s (78.2MB/s), 74.6MiB/s-74.6MiB/s (78.2MB/s-78.2MB/s), io=149MiB (157MB), run=2001-2001msec 00:09:51.155 WRITE: bw=74.5MiB/s (78.2MB/s), 74.5MiB/s-74.5MiB/s (78.2MB/s-78.2MB/s), io=149MiB (156MB), run=2001-2001msec 00:09:51.155 ----------------------------------------------------- 00:09:51.155 Suppressions used: 00:09:51.155 count bytes template 00:09:51.155 1 32 /usr/src/fio/parse.c 00:09:51.155 1 8 libtcmalloc_minimal.so 00:09:51.155 ----------------------------------------------------- 00:09:51.155 00:09:51.155 20:52:08 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:51.155 20:52:08 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:51.155 00:09:51.155 real 0m24.668s 00:09:51.155 user 0m16.352s 00:09:51.155 sys 0m13.998s 00:09:51.155 20:52:08 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.155 ************************************ 00:09:51.155 END TEST nvme_fio 00:09:51.155 ************************************ 00:09:51.155 20:52:08 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:51.155 ************************************ 00:09:51.155 END TEST nvme 00:09:51.155 ************************************ 00:09:51.155 00:09:51.155 real 1m32.025s 00:09:51.155 user 3m31.100s 00:09:51.156 sys 0m24.046s 00:09:51.156 20:52:08 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.156 20:52:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.156 20:52:08 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:51.156 20:52:08 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:51.156 20:52:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.156 20:52:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.156 20:52:08 -- common/autotest_common.sh@10 -- # set +x 00:09:51.156 ************************************ 00:09:51.156 START TEST nvme_scc 00:09:51.156 ************************************ 00:09:51.156 20:52:08 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:51.156 * Looking for test storage... 
00:09:51.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:51.156 20:52:08 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:51.156 20:52:08 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:51.156 20:52:08 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:51.156 20:52:08 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:51.156 20:52:08 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.156 20:52:08 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:51.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.156 --rc genhtml_branch_coverage=1 00:09:51.156 --rc genhtml_function_coverage=1 00:09:51.156 --rc genhtml_legend=1 00:09:51.156 --rc geninfo_all_blocks=1 00:09:51.156 --rc geninfo_unexecuted_blocks=1 00:09:51.156 00:09:51.156 ' 00:09:51.156 20:52:08 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:51.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.156 --rc genhtml_branch_coverage=1 00:09:51.156 --rc genhtml_function_coverage=1 00:09:51.156 --rc genhtml_legend=1 00:09:51.156 --rc geninfo_all_blocks=1 00:09:51.156 --rc geninfo_unexecuted_blocks=1 00:09:51.156 00:09:51.156 ' 00:09:51.156 20:52:08 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:51.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.156 --rc genhtml_branch_coverage=1 00:09:51.156 --rc genhtml_function_coverage=1 00:09:51.156 --rc genhtml_legend=1 00:09:51.156 --rc geninfo_all_blocks=1 00:09:51.156 --rc geninfo_unexecuted_blocks=1 00:09:51.156 00:09:51.156 ' 00:09:51.156 20:52:08 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:51.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.156 --rc genhtml_branch_coverage=1 00:09:51.156 --rc genhtml_function_coverage=1 00:09:51.156 --rc genhtml_legend=1 00:09:51.156 --rc geninfo_all_blocks=1 00:09:51.156 --rc geninfo_unexecuted_blocks=1 00:09:51.156 00:09:51.156 ' 00:09:51.156 20:52:08 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:51.156 20:52:08 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:51.156 20:52:08 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:51.156 20:52:08 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:51.156 20:52:08 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.156 20:52:08 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.156 20:52:08 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.156 20:52:08 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.156 20:52:08 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.156 20:52:08 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:51.156 20:52:08 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
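Once the devices below are rebound to the kernel nvme driver, the controller scan that follows (scan_nvme_ctrls in functions.sh) walks /sys/class/nvme, maps each controller to its PCI BDF, and parses nvme-cli id-ctrl output field by field into per-controller associative arrays; that is what produces the long runs of nvme0[vid]=0x1b36, nvme0[sn]=... assignments further down. A rough, simplified sketch of the idea (the real helper also honors a PCI allow-list and walks each controller's namespaces):

declare -A ctrls bdfs
for ctrl in /sys/class/nvme/nvme*; do
    name=${ctrl##*/}
    pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
    declare -A "$name"                                # nvme0, nvme1, ...
    while IFS=: read -r reg val; do
        reg=${reg// /}; val=${val# }
        [[ -n $reg && -n $val ]] || continue
        eval "${name}[\$reg]=\$val"                   # nvme0[vid]=0x1b36 ...
    done < <(nvme id-ctrl "/dev/$name")
    ctrls[$name]=$name; bdfs[$name]=$pci
done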
00:09:51.156 20:52:08 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:51.156 20:52:08 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:51.156 20:52:08 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:51.156 20:52:08 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:51.156 20:52:08 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:51.156 20:52:08 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:51.156 20:52:08 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:51.156 20:52:08 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:51.156 20:52:08 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:51.156 20:52:08 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:51.156 20:52:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:51.156 20:52:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:51.156 20:52:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:51.156 20:52:08 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:51.156 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:51.416 Waiting for block devices as requested 00:09:51.416 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.416 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.416 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.678 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:56.972 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:56.972 20:52:14 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:56.972 20:52:14 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:56.972 20:52:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:56.972 20:52:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.972 20:52:14 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:56.972 20:52:14 nvme_scc -- nvme/functions.sh@16-23 -- # id-ctrl parse of /dev/nvme0 continues; remaining fields, condensed from the per-field IFS=:/read/eval frames:
00:09:56.973 20:52:14 nvme_scc -- # nvme0: ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:09:56.973 20:52:14 nvme_scc -- # nvme0: oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:09:56.974 20:52:14 nvme_scc -- # nvme0: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:56.975 20:52:14 nvme_scc -- # nvme0: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-'
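The repeated @16..@23 frames above are a single helper at work: nvme_get runs nvme-cli against a device node and folds every "field : value" line of its output into a global bash associative array. A minimal sketch of that loop, reconstructed from the trace (key trimming and multi-token fields such as ps0 are simplified; this is not the verbatim SPDK functions.sh):

  nvme_get() {
      local ref=$1 cmd=$2 dev=$3 reg val
      local -gA "$ref=()"                       # e.g. nvme0=(), as at functions.sh@20
      while IFS=: read -r reg val; do           # split 'sn : 12341' at the first colon (@21)
          reg=${reg%% *}                        # trim the padding after the field name
          [[ -n $reg && -n $val ]] || continue  # skip headers and blank lines (@22)
          eval "${ref}[$reg]=\"${val# }\""      # nvme0[sn]='12341 ' and friends (@23)
      done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
  }
  nvme_get nvme0 id-ctrl /dev/nvme0
  # The parsed values are then ordinary shell state; per the NVMe base spec,
  # ONCS bit 2 is Dataset Management, so oncs=0x15d implies:
  (( ${nvme0[oncs]} & 0x04 )) && echo "Dataset Management supported"

Every iteration of that loop shows up in the raw log as the IFS=:/read/[[ -n ]]/eval quartet, which is why the trace is dominated by those four frames.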
00:09:56.975 20:52:14 nvme_scc -- nvme/functions.sh@53-57 -- # local -n _ctrl_ns=nvme0_ns; namespace scan of /sys/class/nvme/nvme0 finds ng0n1 -> ns_dev=ng0n1; nvme_get ng0n1 id-ns /dev/ng0n1
00:09:56.976 20:52:14 nvme_scc -- # ng0n1: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:56.977 20:52:14 nvme_scc -- # ng0n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 ' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:56.977 20:52:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng0n1
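Eight LBA formats are advertised, and flbas selects the live one: with nlbaf this small, the low nibble of flbas (0x4) indexes lbaf4, whose lbads field is the log2 of the data size, i.e. 4096-byte blocks with no per-block metadata (ms:0). A hedged one-liner against the array just parsed (illustrative only, not a helper from functions.sh):

  fmt=$(( ${ng0n1[flbas]} & 0xf ))                  # low nibble of 0x4 -> format 4
  desc=${ng0n1[lbaf$fmt]}                           # 'ms:0 lbads:12 rp:0 (in use)'
  lbads=$(grep -o 'lbads:[0-9]*' <<<"$desc" | cut -d: -f2)
  echo "$(( 1 << lbads ))-byte blocks"              # lbads:12 -> 4096

With nsze=0x140000 (1310720) blocks of 4096 bytes, that is a 5 GiB namespace.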
00:09:56.977 20:52:14 nvme_scc -- nvme/functions.sh@54-57 -- # namespace scan also finds the block node nvme0n1 -> ns_dev=nvme0n1; nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:56.978 20:52:14 nvme_scc -- # nvme0n1: identical to ng0n1 above (same namespace, reached through the block device node): nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000, lbaf0-lbaf7 as above with lbaf4 (ms:0 lbads:12) in use
00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@58-63 -- # _ctrl_ns[1]=nvme0n1; ctrls[nvme0]=nvme0; nvmes[nvme0]=nvme0_ns; bdfs[nvme0]=0000:00:11.0; ordered_ctrls[0]=nvme0
00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@47-51 -- # next controller: /sys/class/nvme/nvme1 exists, pci=0000:00:10.0, pci_can_use 0000:00:10.0 (scripts/common.sh@18-27) returns 0 -> ctrl_dev=nvme1
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.979 
20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:56.979 
20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.979 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.980 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
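Annotation: every trace entry above comes from nvme_get() in nvme/functions.sh. It runs nvme-cli's id-ctrl (or id-ns) against the device, splits each output line on the first ':' into a reg/val pair, skips empty values, and evals the pair into a global associative array named after the device. A minimal sketch of that loop, reconstructed from the @16-@23 trace lines; the real function's trimming and quoting may differ:

    nvme_get() {                       # e.g. nvme_get nvme1 id-ctrl /dev/nvme1
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                                # functions.sh@20
        while IFS=: read -r reg val; do                    # functions.sh@21
            [[ -n $val ]] || continue                      # skip blank fields, functions.sh@22
            eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""   # functions.sh@23
        done < <(/usr/local/src/nvme-cli/nvme "$@")        # functions.sh@16
    }

Afterwards every identify field is addressable from shell, e.g. ${nvme1[mdts]} is 7 and ${nvme1[sn]} is '12340 ' per the entries above; note how the 'ps 0' output line becomes the whitespace-stripped key ps0 with the rest of the line, colons included, as its value.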
00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.981 20:52:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.981 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:56.982 20:52:14 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
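Annotation: nsze, ncap and nuse a few entries above (0x17a17a for ng1n1) count logical blocks, not bytes. flbas=0x7 selects LBA format 7, and the lbaf7 entry further down in this trace reads 'ms:64 lbads:12 rp:0 (in use)', i.e. 2^12 = 4096-byte data blocks. A quick shell check of the namespace size, using the values captured above (the variable name ns_bytes is ours, not the script's):

    ns_bytes=$(( 0x17a17a << 12 ))   # 1548666 blocks * 4096 B = 6343335936 B (~5.9 GiB)
    echo "$ns_bytes"

The same ~6 GB QEMU namespace appears twice under this controller: once as the generic character node ng1n1 here, and once as the block node nvme1n1 further below.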
00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.982 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:56.983 20:52:14 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 
20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.983 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
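Annotation: the jump from ng1n1 to nvme1n1 above is the @54 glob at work: for each controller the scan loop matches both the generic character node (ngXnY) and the block node (nvmeXnY) and runs nvme_get ... id-ns on each. A condensed sketch of the whole enumeration, pieced together from the @47-@63 trace lines; it lives inside a function in functions.sh (local/namerefs need function scope), ctrls/nvmes/bdfs/ordered_ctrls are declared globally elsewhere, and the real code carries more filtering:

    shopt -s extglob                                   # the @(...) glob below needs extglob
    scan_nvme_ctrls() {                                # hypothetical name for this sketch
        local ctrl ns pci ctrl_dev ns_dev
        for ctrl in /sys/class/nvme/nvme*; do                         # @47
            [[ -e $ctrl ]] || continue                                # @48
            # pci is read from the controller's sysfs entry (derivation not shown in this trace)
            pci_can_use "$pci" || continue                            # @50
            ctrl_dev=${ctrl##*/}                                      # @51, e.g. nvme1
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"             # @52
            local -n _ctrl_ns=${ctrl_dev}_ns                          # @53
            for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54
                [[ -e $ns ]] || continue                              # @55
                ns_dev=${ns##*/}                                      # @56
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"               # @57
                _ctrl_ns[${ns##*n}]=$ns_dev                           # @58
            done
            ctrls[$ctrl_dev]=$ctrl_dev                                # @60
            nvmes[$ctrl_dev]=${ctrl_dev}_ns                           # @61
            bdfs[$ctrl_dev]=$pci                                      # @62
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev                # @63
        done
    }

For nvme1 the glob expands "ng${ctrl##*nvme}" to ng1 and "${ctrl##*/}n" to nvme1n, which is exactly why both /dev/ng1n1 and /dev/nvme1n1 get their own identify pass in this trace.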
00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:56.984 
20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:56.984 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:56.985 20:52:14 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:56.985 20:52:14 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:56.985 20:52:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:56.985 20:52:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.985 20:52:14 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:56.985 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
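Among the nvme2 fields above, mdts=7 bounds every I/O: MDTS caps a single transfer at 2^MDTS times the controller's minimum page size (CAP.MPSMIN). A hedged sketch of that arithmetic; the 4096-byte minimum page is an assumption typical of QEMU's emulated controller, not something read from this trace:

  mdts=7 mpsmin_bytes=4096
  echo "max transfer: $(( mpsmin_bytes * (1 << mdts) )) bytes"   # 524288 = 512 KiB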
00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:56.986 20:52:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
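The wctemp/cctemp values captured just above are reported in kelvins, so they translate directly into the controller's thermal thresholds. A short sketch using the readings from this trace:

  wctemp=343 cctemp=373                        # from the nvme2 id-ctrl output above
  echo "warning at $(( wctemp - 273 )) C, critical at $(( cctemp - 273 )) C"   # 70 C / 100 C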
00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.986 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:56.987 20:52:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:56.987 
20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:56.987 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:56.988 
20:52:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
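The loop at functions.sh@54 that just matched /sys/class/nvme/nvme2/ng2n1 uses a single extglob alternation to visit both the generic character node (ngXnY) and the block node (nvmeXnY) of each namespace. A sketch reconstructed from the trace; @(...) patterns require extglob, and the echo is mine:

  shopt -s extglob
  for ctrl in /sys/class/nvme/nvme+([0-9]); do
    inst=${ctrl##*nvme}                              # "2" for .../nvme2
    for ns in "$ctrl/"@("ng${inst}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue                       # an unmatched glob stays literal
      echo "${ctrl##*/}: namespace node ${ns##*/}"   # e.g. ng2n1, nvme2n1
    done
  done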
00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.988 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.989 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:56.990 20:52:14 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 
20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:56.990 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.991 20:52:14 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.991 20:52:14 
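
The first three fields captured for every namespace here, nsze, ncap, and nuse, all read 0x100000: 1,048,576 logical blocks allocated and in use. Combined with the 4096-byte in-use LBA format recorded further down, that works out to 4 GiB per namespace; a quick check in the same shell dialect (array name illustrative):

    blocks=$(( ${ng2n3[nsze]} ))                 # 0x100000 -> 1048576
    echo "$(( blocks * 4096 / 1024**3 )) GiB"    # -> 4 GiB
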
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:56.991 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.992 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- 
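
flbas=0x4 means each of these namespaces is formatted with lbaf table entry 4, the row the log marks "(in use)": ms:0 lbads:12 rp:0, i.e. no separate metadata and 2^12 = 4096-byte logical blocks. Decoding that from the values nvme_get captured (a sketch; only the low four bits of flbas index the table):

    idx=$(( ${ng2n3[flbas]} & 0xf ))              # 0x4 -> 4
    lbaf=${ng2n3[lbaf$idx]}                       # "ms:0 lbads:12 rp:0 (in use)"
    lbads=${lbaf#*lbads:} ; lbads=${lbads%% *}    # -> 12
    echo "logical block size: $(( 1 << lbads ))"  # -> 4096
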
nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.993 20:52:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:56.993 20:52:14 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.993 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:56.994 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:56.995 20:52:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:56.995 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:56.996 
20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:56.996 20:52:15 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:56.996 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.997 20:52:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:56.997 20:52:15 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:56.997 20:52:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:56.998 20:52:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:56.998 20:52:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:56.998 20:52:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.998 20:52:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:56.998 20:52:15 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:56.998 20:52:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.998 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 
20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:56.999 20:52:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 
20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:56.999 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:57.000 
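(Editor's note: the sqes=0x66 and cqes=0x44 fields captured here pack two log2 sizes each: the low nibble is the required minimum queue entry size and the high nibble the maximum, per the NVMe Identify Controller layout. Decoded with bash arithmetic, using the values from this trace:

    sqes=0x66 cqes=0x44
    echo $(( 1 << (sqes & 0xf) ))   # 64-byte minimum submission queue entry
    echo $(( 1 << (sqes >> 4) ))    # 64-byte maximum submission queue entry
    echo $(( 1 << (cqes & 0xf) ))   # 16-byte completion queue entry (min and max agree here)
)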
20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:57.000 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:57.260 20:52:15 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:57.260 20:52:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:57.260 20:52:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:57.261 20:52:15 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:57.261 20:52:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:57.261 20:52:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:57.261 20:52:15 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:57.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:58.086 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.086 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.086 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.086 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:58.086 20:52:16 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:58.086 20:52:16 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:58.086 20:52:16 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.086 20:52:16 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:58.344 ************************************ 00:09:58.344 START TEST nvme_simple_copy 00:09:58.344 ************************************ 00:09:58.344 20:52:16 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:58.344 Initializing NVMe Controllers 00:09:58.344 Attaching to 0000:00:10.0 00:09:58.344 Controller supports SCC. Attached to 0000:00:10.0 00:09:58.344 Namespace ID: 1 size: 6GB 00:09:58.344 Initialization complete. 
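(Editor's note: get_ctrls_with_feature settled on nvme1 above by testing bit 8 of each controller's ONCS value, the bit that advertises the NVMe Copy command; all four controllers report oncs=0x15d, so the bit is set, and the attach banner confirms it with "Controller supports SCC." The check, reduced to its core:

    oncs=0x15d                       # from the id-ctrl dumps above
    if (( oncs & (1 << 8) )); then   # ONCS bit 8: Copy (simple copy) supported
        echo "controller supports SCC"
    fi
)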
00:09:58.344 00:09:58.344 Controller QEMU NVMe Ctrl (12340 ) 00:09:58.344 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:58.344 Namespace Block Size:4096 00:09:58.344 Writing LBAs 0 to 63 with Random Data 00:09:58.344 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:58.344 LBAs matching Written Data: 64 00:09:58.344 00:09:58.344 real 0m0.244s 00:09:58.344 user 0m0.096s 00:09:58.344 sys 0m0.047s 00:09:58.344 20:52:16 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.344 ************************************ 00:09:58.344 END TEST nvme_simple_copy 00:09:58.344 20:52:16 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:58.344 ************************************ 00:09:58.603 00:09:58.603 real 0m7.827s 00:09:58.603 user 0m1.078s 00:09:58.603 sys 0m1.519s 00:09:58.603 ************************************ 00:09:58.603 END TEST nvme_scc 00:09:58.603 ************************************ 00:09:58.603 20:52:16 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.603 20:52:16 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:58.603 20:52:16 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:58.603 20:52:16 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:58.603 20:52:16 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:58.603 20:52:16 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:58.603 20:52:16 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:58.603 20:52:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.603 20:52:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.603 20:52:16 -- common/autotest_common.sh@10 -- # set +x 00:09:58.603 ************************************ 00:09:58.603 START TEST nvme_fdp 00:09:58.603 ************************************ 00:09:58.603 20:52:16 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:58.603 * Looking for test storage... 00:09:58.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:58.603 20:52:16 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:58.603 20:52:16 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:58.603 20:52:16 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:58.603 20:52:16 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.603 20:52:16 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:58.603 20:52:16 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.603 20:52:16 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:58.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.603 --rc genhtml_branch_coverage=1 00:09:58.603 --rc genhtml_function_coverage=1 00:09:58.603 --rc genhtml_legend=1 00:09:58.603 --rc geninfo_all_blocks=1 00:09:58.603 --rc geninfo_unexecuted_blocks=1 00:09:58.603 00:09:58.603 ' 00:09:58.603 20:52:16 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:58.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.603 --rc genhtml_branch_coverage=1 00:09:58.603 --rc genhtml_function_coverage=1 00:09:58.603 --rc genhtml_legend=1 00:09:58.603 --rc geninfo_all_blocks=1 00:09:58.603 --rc geninfo_unexecuted_blocks=1 00:09:58.603 00:09:58.603 ' 00:09:58.603 20:52:16 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:58.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.603 --rc genhtml_branch_coverage=1 00:09:58.603 --rc genhtml_function_coverage=1 00:09:58.603 --rc genhtml_legend=1 00:09:58.603 --rc geninfo_all_blocks=1 00:09:58.603 --rc geninfo_unexecuted_blocks=1 00:09:58.603 00:09:58.603 ' 00:09:58.603 20:52:16 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:58.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.603 --rc genhtml_branch_coverage=1 00:09:58.603 --rc genhtml_function_coverage=1 00:09:58.603 --rc genhtml_legend=1 00:09:58.603 --rc geninfo_all_blocks=1 00:09:58.603 --rc geninfo_unexecuted_blocks=1 00:09:58.603 00:09:58.603 ' 00:09:58.603 20:52:16 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:58.603 20:52:16 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:58.604 20:52:16 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:58.604 20:52:16 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:58.604 20:52:16 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:58.604 20:52:16 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:58.604 20:52:16 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:58.604 20:52:16 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:58.604 20:52:16 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:58.604 20:52:16 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.604 20:52:16 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.604 20:52:16 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.604 20:52:16 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:58.604 20:52:16 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:58.604 20:52:16 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:58.604 20:52:16 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:58.604 20:52:16 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:58.604 20:52:16 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:58.604 20:52:16 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:58.604 20:52:16 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:58.604 20:52:16 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:58.604 20:52:16 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:58.604 20:52:16 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:58.604 20:52:16 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:58.604 20:52:16 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:59.173 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:59.173 Waiting for block devices as requested 00:09:59.173 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.431 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.431 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.431 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:04.708 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:04.708 20:52:22 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:04.708 20:52:22 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:04.708 20:52:22 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:04.708 20:52:22 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:04.708 20:52:22 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:04.708 20:52:22 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.708 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:04.709 20:52:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.709 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:04.710 20:52:22 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:04.710 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:04.711 20:52:22 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 
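(Editor's note: alongside oncs, this dump records mdts=7 for nvme0 earlier in the scan. MDTS caps a single transfer at 2^mdts units of the controller's minimum memory page size (CAP.MPSMIN); a quick back-of-the-envelope, assuming the usual 4 KiB minimum page:

    mdts=7 mpsmin=4096                  # mdts from this dump; 4 KiB page size assumed
    echo $(( (1 << mdts) * mpsmin ))    # -> 524288, i.e. a 512 KiB ceiling per I/O
)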
20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:04.711 20:52:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:04.711 20:52:22 
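Every nvme_get call in this trace follows the same pattern: run nvme-cli's id-ctrl or id-ns, split each "reg : val" output line on the colon with IFS, and eval the pair into a globally visible associative array named by the caller. A minimal standalone sketch of that pattern, assuming canned input; parse_id_output and the here-doc values are illustrative stand-ins, not the actual nvme/functions.sh code:

#!/usr/bin/env bash
# Sketch of the parsing loop visible in this trace: IFS=: + read -r reg val,
# then eval into a caller-named global associative array.
parse_id_output() {
  local ref=$1 reg val
  declare -gA "$ref=()"
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}             # nvme-cli pads keys with spaces
    val=${val#"${val%%[![:space:]]*}"}   # drop the space after the colon
    [[ -n $reg ]] && eval "$ref[\$reg]=\$val"
  done
}

# Canned input for illustration; a real run would pipe e.g.
#   /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
parse_id_output demo <<'EOF'
sqes  : 0x66
cqes  : 0x44
nn    : 256
EOF
echo "sqes=${demo[sqes]} cqes=${demo[cqes]} nn=${demo[nn]}"   # sqes=0x66 cqes=0x44 nn=256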
00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:04.711 20:52:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:10:04.712 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
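The "(in use)" marker above points at lbaf4, which agrees with flbas=0x4 (the low bits of flbas select the active LBA format), and lbads:12 means 2^12-byte logical blocks. A one-off decode of that string; the lbaf4 value is hard-coded here from this trace, not read from a device:

lbaf4='ms:0 lbads:12 rp:0 (in use)'
if [[ $lbaf4 =~ lbads:([0-9]+) ]]; then
  echo "logical block size: $((1 << BASH_REMATCH[1])) bytes"   # -> 4096, with ms:0 (no metadata)
fi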
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:10:04.713 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:10:04.714 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
'nvme1[sn]="12340 "' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:04.715 20:52:22 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:04.715 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.716 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.717 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:04.718 20:52:22 
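The trace above is nvme/functions.sh caching the controller's `nvme id-ctrl` output into a bash associative array (nvme1), one register per line. A minimal sketch of that parse loop, reconstructed from the @16-@23 markers in the trace; the exact quoting and key sanitization in the real script may differ:

    # Sketch: cache "reg : val" lines from nvme-cli into a global assoc array.
    nvme_get() {                     # e.g. nvme_get nvme1 id-ctrl /dev/nvme1
        local ref=$1 reg val
        shift
        local -gA "$ref=()"          # @20: global array named after the device
        while IFS=: read -r reg val; do          # @21: split on the first ':'
            [[ -n $val ]] || continue            # @22: skip lines with no value
            # @23: e.g. nvme1[oacs]=0x12a (strip whitespace from the key only;
            # the value keeps any further ':', as in nvme1[subnqn] above)
            eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16
    }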
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:10:04.718 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:04.719 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
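Next the script walks the controller's namespace nodes, parsing id-ns for both the character device (ng1n1) and the block device (nvme1n1). A sketch of that scan, assuming the extglob shown verbatim at @54 and adding the declarations needed to run the fragment standalone:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    declare -A nvme1_ns
    declare -n _ctrl_ns=nvme1_ns                  # @53: nameref to the per-ctrl map
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # ng1n1, nvme1n1
        [[ -e $ns ]] || continue                  # @55: node exists?
        ns_dev=${ns##*/}                          # @56: e.g. ng1n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: reuse the parser above
        _ctrl_ns[${ns##*n}]=$ns_dev               # @58: key is the namespace id
    done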
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:10:04.720 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
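Both namespace nodes parsed to identical values, and the controller is now recorded in the global lookup tables before the loop advances to nvme2 below (where pci_can_use in scripts/common.sh first checks the controller's PCI address; the empty pattern in its trace suggests no allow/block list is set, so it returns 0, i.e. usable). A sketch of the bookkeeping at @60-@63, with standalone declarations added for illustration:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    ctrl_dev=nvme1
    pci=0000:00:10.0
    ctrls["$ctrl_dev"]=$ctrl_dev                  # @60: nvme1 -> its id-ctrl array
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns             # @61: name of its namespace map
    bdfs["$ctrl_dev"]=$pci                        # @62: PCI address (BDF)
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev    # @63: indexed by ctrl number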
[[ -n '' ]] 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.721 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
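The @20-@23 steps traced above are the whole of nvme_get: each line of `nvme id-ctrl` output is split on the first colon into reg/val, and non-empty pairs are eval'd into a global associative array named after the device (nvme2 here). A minimal self-contained sketch of that pattern, assuming bash 4.2+ and nvme-cli on PATH; this is a reconstruction from the trace, not the actual nvme/functions.sh source:

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern visible in the trace: parse "reg : val"
    # lines from nvme-cli into a named global associative array.
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # e.g. declare -gA nvme2=()
        while IFS=: read -r reg val; do
            [[ -n $reg && -n $val ]] || continue # skip headers/blank lines
            reg=${reg//[[:space:]]/}             # "ps    0" -> "ps0"
            eval "${ref}[${reg}]=\${val# }"      # nvme2[vid]=0x1b36, nvme2[mdts]=7, ...
        done < <("$@")
    }

    nvme_get_sketch nvme2 nvme id-ctrl /dev/nvme2
    echo "vid=${nvme2[vid]} mdts=${nvme2[mdts]} subnqn=${nvme2[subnqn]}"

The eval defers expansion of the value, so the assignment lands on the array element exactly as the trace shows at functions.sh@23.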
00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:04.722 20:52:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.722 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
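The wctemp=343 and cctemp=373 captured just above are the warning and critical composite temperature thresholds; Identify Controller reports them in kelvins, so a report-friendly conversion is a one-liner (values hard-coded here from the trace):

    # WCTEMP/CCTEMP are in kelvins per the NVMe Identify Controller layout.
    wctemp=343 cctemp=373
    echo "warning threshold:  $((wctemp - 273)) C"   # -> 70 C
    echo "critical threshold: $((cctemp - 273)) C"   # -> 100 C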
00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:04.723 20:52:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:04.723 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
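Several controller fields collected above are bit masks rather than scalars (oacs=0x12a, oncs=0x15d, vwc=0x7), and test scripts normally gate features on individual bits. A sketch of that check, using ONCS bit positions as given in the NVMe base specification (bit 2 Dataset Management, bit 3 Write Zeroes):

    # ONCS=0x15d from the trace above is a mask of optional NVM commands.
    oncs=0x15d
    supports() { (( (oncs >> $1) & 1 )); }   # test one bit of the mask
    supports 2 && echo "Dataset Management (deallocate) supported"
    supports 3 && echo "Write Zeroes supported"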
00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.724 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 
20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.725 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.726 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val
00:10:04.726-00:10:04.727 20:52:22 nvme_fdp -- nvme/functions.sh@22-23 -- # [condensed: remaining id-ns fields parsed for ng2n2: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000]
00:10:04.727 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # [condensed: LBA formats for ng2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0']
00:10:04.727 20:52:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[2]=ng2n2
00:10:04.727 20:52:22 nvme_fdp -- nvme/functions.sh@55-57 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]; ns_dev=ng2n3; nvme_get ng2n3 id-ns /dev/ng2n3
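A note for readers following the trace: each nvme_get call above runs nvme-cli's id-ns against a namespace device and folds every "reg : val" line of its output into a global bash associative array named after that device. A minimal paraphrase of the helper, inferred from the functions.sh@16-23 steps visible in the log (not the verbatim source; the nvme-cli path is the one the trace shows):

    nvme_get() {
        local ref=$1 reg val    # $ref names the global array to fill, e.g. ng2n3
        shift
        local -gA "$ref=()"     # declare it globally, as functions.sh@20 does

        # functions.sh@21-23: split each output line at the first ':' and
        # eval the pair into the array; header lines with no value are skipped.
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}            # 'lbaf  4 ' -> 'lbaf4'
            [[ -n $val ]] || continue
            eval "${ref}[\$reg]=\${val# }"      # e.g. ng2n3[nsze]=0x100000
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    # Usage mirroring the trace: nvme_get ng2n3 id-ns /dev/ng2n3
    # Afterwards ${ng2n3[nsze]}, ${ng2n3[lbaf4]}, etc. hold the parsed values.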
00:10:04.727 20:52:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:10:04.727-00:10:04.729 20:52:22 nvme_fdp -- nvme/functions.sh@22-23 -- # [condensed: ng2n3 id-ns fields are identical to ng2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1, atomicity/preferred-write fields (nawun..nows) all 0, mssrl=128 mcl=128 msrc=127, nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0, zero nguid/eui64, lbaf0-lbaf7 as listed above with lbaf4 in use]
00:10:04.729 20:52:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[3]=ng2n3
00:10:04.729 20:52:22 nvme_fdp -- nvme/functions.sh@55-56 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]; ns_dev=nvme2n1
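The enclosing loop is the functions.sh@54-58 pattern the trace keeps returning to: an extglob that matches both the character-device names (ng2n1, ng2n2, ...) and the block-device names (nvme2n1, ...) under a controller's sysfs directory, then indexes each by its namespace id. A sketch under the same assumptions (requires extglob):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns

    # "ng${ctrl##*nvme}" -> "ng2" and "${ctrl##*/}n" -> "nvme2n", so the glob
    # picks up ng2n1, ng2n2, ... and nvme2n1, nvme2n2, ... in one pass.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                # e.g. ng2n3 or nvme2n3
        _ctrl_ns[${ns##*n}]=$ns_dev     # keyed by namespace id, e.g. 3
    done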
00:10:04.729 20:52:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:04.729 20:52:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:10:04.729-00:10:04.731 20:52:22 nvme_fdp -- nvme/functions.sh@22-23 -- # [condensed: nvme2n1 id-ns fields are identical to ng2n2/ng2n3, lbaf4 (ms:0 lbads:12 rp:0) in use]
00:10:04.731 20:52:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:10:04.731 20:52:22 nvme_fdp -- nvme/functions.sh@55-57 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]; ns_dev=nvme2n2; nvme_get nvme2n2 id-ns /dev/nvme2n2; nsze through flbas read as above
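On the lbaf values recorded throughout this dump: lbads is the log2 of the LBA data size and ms the per-block metadata bytes, so the in-use format lbaf4 ("ms:0 lbads:12 rp:0") means plain 4096-byte blocks with no interleaved metadata, and flbas=0x4 is the field selecting index 4. A quick check of the arithmetic:

    lbads=9;  echo $(( 1 << lbads ))    # lbaf0-lbaf3: 512-byte LBAs
    lbads=12; echo $(( 1 << lbads ))    # lbaf4-lbaf7: 4096-byte LBAs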
00:10:04.731-00:10:04.993 20:52:22 nvme_fdp -- nvme/functions.sh@22-23 -- # [condensed: nvme2n2 id-ns fields are identical to the namespaces above, through lbaf5]
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:04.994 20:52:22 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:04.994 20:52:22 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:04.994 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:04.995 20:52:22 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:04.995 20:52:22 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:04.995 20:52:22 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:04.995 20:52:22 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:04.995 20:52:22 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.995 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
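The xtrace around this point is nvme/functions.sh's nvme_get looping over "nvme id-ctrl" output: each "field : value" line is split on the first colon (IFS=:), non-empty values are kept, and eval stores each one under its field name in an associative array (nvme3 here). A minimal standalone sketch of that pattern, not the SPDK helper itself:

    #!/usr/bin/env bash
    # Sketch of the nvme_get parse loop traced above (assumes nvme-cli is
    # installed at the path this run uses and that /dev/nvme3 exists).
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # field names are space-padded
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=${val# }             # keep the value, minus one pad space
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} ctratt=${ctrl[ctratt]}"

Because read assigns everything after the first colon to val, values that themselves contain colons (the lbaf descriptors, power states) survive intact, which is why the traced arrays can hold entries like 'ms:0 lbads:12 rp:0 (in use)'.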
00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.996 20:52:22 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 
20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:04.996 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:04.997 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
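The sqes/cqes bytes parsed just above pack two log2 sizes into one byte (low nibble = required entry size, high nibble = maximum, per the NVMe Identify Controller layout), so nvme3's sqes=0x66 means 64-byte submission queue entries and cqes=0x44 means 16-byte completion queue entries. A quick decode of those two values:

    # Decode sqes/cqes nibbles: 2^(low nibble) = required size, 2^(high) = max.
    sqes=0x66 cqes=0x44
    printf 'SQE: required %dB, max %dB\n' $(( 2 ** (sqes & 0xf) )) $(( 2 ** (sqes >> 4) ))
    printf 'CQE: required %dB, max %dB\n' $(( 2 ** (cqes & 0xf) )) $(( 2 ** (cqes >> 4) ))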
00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:04.998 20:52:22 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:04.998 20:52:22 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:04.999 20:52:22 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:04.999 20:52:22 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:04.999 20:52:22 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:04.999 20:52:22 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:05.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:05.824 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.824 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.824 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.824 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:05.824 20:52:23 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:05.824 20:52:23 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:05.824 20:52:23 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.824 20:52:23 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:05.824 ************************************ 00:10:05.824 START TEST nvme_flexible_data_placement 00:10:05.824 ************************************ 00:10:05.824 20:52:23 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:06.083 Initializing NVMe Controllers 00:10:06.083 Attaching to 0000:00:13.0 00:10:06.083 Controller supports FDP Attached to 0000:00:13.0 00:10:06.083 Namespace ID: 1 Endurance Group ID: 1 00:10:06.083 Initialization complete. 
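For reference, the controller selection traced above reduces to one capability test: read each controller's CTRATT value and check bit 19, which NVMe defines as Flexible Data Placement support (0x88010 has it set, 0x8000 does not). A minimal standalone sketch of that check, with associative arrays standing in for the identify data that nvme/functions.sh caches (the values are the ones observed in this run):

#!/usr/bin/env bash
# Sketch: pick the first controller whose CTRATT has bit 19 (FDP) set.
declare -A nvme0=([ctratt]=0x8000)    # no FDP
declare -A nvme3=([ctratt]=0x88010)   # bit 19 (0x80000) set -> FDP capable
declare -A ctrls=([nvme0]=1 [nvme3]=1)

ctrl_has_fdp() {
    local ctrl=$1 ctratt
    local -n _ctrl=$ctrl              # nameref into the controller's register map
    ctratt=${_ctrl[ctratt]}
    (( ctratt & 1 << 19 ))            # CTRATT bit 19: Flexible Data Placement
}

for ctrl in "${!ctrls[@]}"; do
    ctrl_has_fdp "$ctrl" && echo "$ctrl"   # prints: nvme3
done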
00:10:06.083 00:10:06.083 ================================== 00:10:06.083 == FDP tests for Namespace: #01 == 00:10:06.083 ================================== 00:10:06.083 00:10:06.083 Get Feature: FDP: 00:10:06.083 ================= 00:10:06.083 Enabled: Yes 00:10:06.083 FDP configuration Index: 0 00:10:06.083 00:10:06.083 FDP configurations log page 00:10:06.083 =========================== 00:10:06.083 Number of FDP configurations: 1 00:10:06.083 Version: 0 00:10:06.083 Size: 112 00:10:06.083 FDP Configuration Descriptor: 0 00:10:06.083 Descriptor Size: 96 00:10:06.083 Reclaim Group Identifier format: 2 00:10:06.083 FDP Volatile Write Cache: Not Present 00:10:06.083 FDP Configuration: Valid 00:10:06.083 Vendor Specific Size: 0 00:10:06.083 Number of Reclaim Groups: 2 00:10:06.083 Number of Reclaim Unit Handles: 8 00:10:06.083 Max Placement Identifiers: 128 00:10:06.083 Number of Namespaces Supported: 256 00:10:06.083 Reclaim Unit Nominal Size: 6000000 bytes 00:10:06.083 Estimated Reclaim Unit Time Limit: Not Reported 00:10:06.083 RUH Desc #000: RUH Type: Initially Isolated 00:10:06.083 RUH Desc #001: RUH Type: Initially Isolated 00:10:06.083 RUH Desc #002: RUH Type: Initially Isolated 00:10:06.083 RUH Desc #003: RUH Type: Initially Isolated 00:10:06.083 RUH Desc #004: RUH Type: Initially Isolated 00:10:06.083 RUH Desc #005: RUH Type: Initially Isolated 00:10:06.083 RUH Desc #006: RUH Type: Initially Isolated 00:10:06.083 RUH Desc #007: RUH Type: Initially Isolated 00:10:06.083 00:10:06.083 FDP reclaim unit handle usage log page 00:10:06.083 ====================================== 00:10:06.083 Number of Reclaim Unit Handles: 8 00:10:06.083 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:06.083 RUH Usage Desc #001: RUH Attributes: Unused 00:10:06.083 RUH Usage Desc #002: RUH Attributes: Unused 00:10:06.083 RUH Usage Desc #003: RUH Attributes: Unused 00:10:06.083 RUH Usage Desc #004: RUH Attributes: Unused 00:10:06.083 RUH Usage Desc #005: RUH Attributes: Unused 00:10:06.083 RUH Usage Desc #006: RUH Attributes: Unused 00:10:06.083 RUH Usage Desc #007: RUH Attributes: Unused 00:10:06.083 00:10:06.083 FDP statistics log page 00:10:06.083 ======================= 00:10:06.083 Host bytes with metadata written: 2109562880 00:10:06.083 Media bytes with metadata written: 2110693376 00:10:06.083 Media bytes erased: 0 00:10:06.083 00:10:06.083 FDP Reclaim unit handle status 00:10:06.083 ============================== 00:10:06.083 Number of RUHS descriptors: 2 00:10:06.083 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000042a 00:10:06.083 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:06.083 00:10:06.083 FDP write on placement id: 0 success 00:10:06.083 00:10:06.083 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:06.083 00:10:06.083 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:06.083 00:10:06.083 Get Feature: FDP Events for Placement handle: #0 00:10:06.083 ======================== 00:10:06.083 Number of FDP Events: 6 00:10:06.083 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:06.083 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:06.083 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:06.083 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:06.083 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:06.083 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:06.083 00:10:06.083 FDP events log
page 00:10:06.083 =================== 00:10:06.083 Number of FDP events: 1 00:10:06.083 FDP Event #0: 00:10:06.083 Event Type: RU Not Written to Capacity 00:10:06.083 Placement Identifier: Valid 00:10:06.083 NSID: Valid 00:10:06.083 Location: Valid 00:10:06.083 Placement Identifier: 0 00:10:06.083 Event Timestamp: 3 00:10:06.084 Namespace Identifier: 1 00:10:06.084 Reclaim Group Identifier: 0 00:10:06.084 Reclaim Unit Handle Identifier: 0 00:10:06.084 00:10:06.084 FDP test passed 00:10:06.084 00:10:06.084 real 0m0.212s 00:10:06.084 user 0m0.061s 00:10:06.084 sys 0m0.050s 00:10:06.084 20:52:24 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.084 20:52:24 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:06.084 ************************************ 00:10:06.084 END TEST nvme_flexible_data_placement 00:10:06.084 ************************************ 00:10:06.084 00:10:06.084 real 0m7.641s 00:10:06.084 user 0m1.027s 00:10:06.084 sys 0m1.508s 00:10:06.084 20:52:24 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.084 20:52:24 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:06.084 ************************************ 00:10:06.084 END TEST nvme_fdp 00:10:06.084 ************************************ 00:10:06.343 20:52:24 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:06.343 20:52:24 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:06.343 20:52:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.343 20:52:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.343 20:52:24 -- common/autotest_common.sh@10 -- # set +x 00:10:06.343 ************************************ 00:10:06.343 START TEST nvme_rpc 00:10:06.343 ************************************ 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:06.343 * Looking for test storage... 
00:10:06.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:06.343 20:52:24 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:06.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.343 --rc genhtml_branch_coverage=1 00:10:06.343 --rc genhtml_function_coverage=1 00:10:06.343 --rc genhtml_legend=1 00:10:06.343 --rc geninfo_all_blocks=1 00:10:06.343 --rc geninfo_unexecuted_blocks=1 00:10:06.343 00:10:06.343 ' 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:06.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.343 --rc genhtml_branch_coverage=1 00:10:06.343 --rc genhtml_function_coverage=1 00:10:06.343 --rc genhtml_legend=1 00:10:06.343 --rc geninfo_all_blocks=1 00:10:06.343 --rc geninfo_unexecuted_blocks=1 00:10:06.343 00:10:06.343 ' 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:06.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.343 --rc genhtml_branch_coverage=1 00:10:06.343 --rc genhtml_function_coverage=1 00:10:06.343 --rc genhtml_legend=1 00:10:06.343 --rc geninfo_all_blocks=1 00:10:06.343 --rc geninfo_unexecuted_blocks=1 00:10:06.343 00:10:06.343 ' 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:06.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:06.343 --rc genhtml_branch_coverage=1 00:10:06.343 --rc genhtml_function_coverage=1 00:10:06.343 --rc genhtml_legend=1 00:10:06.343 --rc geninfo_all_blocks=1 00:10:06.343 --rc geninfo_unexecuted_blocks=1 00:10:06.343 00:10:06.343 ' 00:10:06.343 20:52:24 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.343 20:52:24 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:06.343 20:52:24 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:06.343 20:52:24 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77285 00:10:06.343 20:52:24 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:06.343 20:52:24 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:06.343 20:52:24 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77285 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77285 ']' 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.343 20:52:24 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.602 [2024-11-20 20:52:24.493400] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
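The get_first_nvme_bdf expansion above boils down to three steps: render a JSON config with gen_nvme.sh, pull every controller's traddr with jq, and take the first address. A condensed sketch of those helpers, assuming $rootdir points at the SPDK checkout as in the trace:

# Sketch: enumerate NVMe PCI addresses from gen_nvme.sh's JSON config.
get_nvme_bdfs() {
    local bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && return 1    # no NVMe controllers found
    printf '%s\n' "${bdfs[@]}"
}

get_first_nvme_bdf() {
    local bdfs=($(get_nvme_bdfs))
    echo "${bdfs[0]}"                     # here: 0000:00:10.0
}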
00:10:06.602 [2024-11-20 20:52:24.493529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77285 ] 00:10:06.602 [2024-11-20 20:52:24.638304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:06.602 [2024-11-20 20:52:24.664192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.602 [2024-11-20 20:52:24.664229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.537 20:52:25 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.537 20:52:25 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:07.537 20:52:25 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:07.537 Nvme0n1 00:10:07.537 20:52:25 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:07.537 20:52:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:07.796 request: 00:10:07.796 { 00:10:07.796 "bdev_name": "Nvme0n1", 00:10:07.796 "filename": "non_existing_file", 00:10:07.796 "method": "bdev_nvme_apply_firmware", 00:10:07.796 "req_id": 1 00:10:07.796 } 00:10:07.796 Got JSON-RPC error response 00:10:07.796 response: 00:10:07.796 { 00:10:07.796 "code": -32603, 00:10:07.796 "message": "open file failed." 00:10:07.796 } 00:10:07.796 20:52:25 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:07.796 20:52:25 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:07.796 20:52:25 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:08.055 20:52:25 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:08.055 20:52:25 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77285 00:10:08.055 20:52:25 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77285 ']' 00:10:08.055 20:52:25 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77285 00:10:08.055 20:52:25 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:08.055 20:52:25 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.055 20:52:25 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77285 00:10:08.055 killing process with pid 77285 00:10:08.055 20:52:25 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:08.055 20:52:25 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:08.055 20:52:25 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77285' 00:10:08.055 20:52:25 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77285 00:10:08.055 20:52:25 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77285 00:10:08.313 ************************************ 00:10:08.313 END TEST nvme_rpc 00:10:08.313 ************************************ 00:10:08.313 00:10:08.313 real 0m2.098s 00:10:08.313 user 0m4.050s 00:10:08.313 sys 0m0.511s 00:10:08.313 20:52:26 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.313 20:52:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.313 20:52:26 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:08.313 20:52:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:08.313 20:52:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.313 20:52:26 -- common/autotest_common.sh@10 -- # set +x 00:10:08.313 ************************************ 00:10:08.313 START TEST nvme_rpc_timeouts 00:10:08.313 ************************************ 00:10:08.313 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:08.313 * Looking for test storage... 00:10:08.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:08.313 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.313 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.573 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.573 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:08.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.573 20:52:26 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:08.573 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.573 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.573 --rc genhtml_branch_coverage=1 00:10:08.573 --rc genhtml_function_coverage=1 00:10:08.573 --rc genhtml_legend=1 00:10:08.573 --rc geninfo_all_blocks=1 00:10:08.573 --rc geninfo_unexecuted_blocks=1 00:10:08.573 00:10:08.573 ' 00:10:08.573 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.573 --rc genhtml_branch_coverage=1 00:10:08.573 --rc genhtml_function_coverage=1 00:10:08.573 --rc genhtml_legend=1 00:10:08.573 --rc geninfo_all_blocks=1 00:10:08.573 --rc geninfo_unexecuted_blocks=1 00:10:08.573 00:10:08.573 ' 00:10:08.573 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.573 --rc genhtml_branch_coverage=1 00:10:08.573 --rc genhtml_function_coverage=1 00:10:08.573 --rc genhtml_legend=1 00:10:08.573 --rc geninfo_all_blocks=1 00:10:08.573 --rc geninfo_unexecuted_blocks=1 00:10:08.573 00:10:08.573 ' 00:10:08.573 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.573 --rc genhtml_branch_coverage=1 00:10:08.573 --rc genhtml_function_coverage=1 00:10:08.573 --rc genhtml_legend=1 00:10:08.573 --rc geninfo_all_blocks=1 00:10:08.573 --rc geninfo_unexecuted_blocks=1 00:10:08.573 00:10:08.573 ' 00:10:08.573 20:52:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:08.573 20:52:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77339 00:10:08.573 20:52:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77339 00:10:08.573 20:52:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77371 00:10:08.573 20:52:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:08.573 20:52:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77371 00:10:08.573 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77371 ']' 00:10:08.573 20:52:26 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:08.573 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.573 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.573 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
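The lt/cmp_versions trace that precedes each test body is the lcov version gate from scripts/common.sh: split both version strings on dots and compare them component-wise, treating a missing component as zero. A trimmed sketch of the same comparison, simplified to purely numeric components (the real helper also splits on '-' and ':' and validates each field):

# Sketch: return 0 when version $1 sorts strictly before version $2.
lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1    # versions are equal
}

lt 1.15 2 && echo "lcov is older than 2.0"   # 1 < 2, so this prints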
00:10:08.573 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.573 20:52:26 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:08.573 [2024-11-20 20:52:26.566016] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:10:08.573 [2024-11-20 20:52:26.566132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77371 ] 00:10:08.832 [2024-11-20 20:52:26.708090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:08.832 [2024-11-20 20:52:26.734086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.832 [2024-11-20 20:52:26.734126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.400 20:52:27 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.400 Checking default timeout settings: 00:10:09.400 20:52:27 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:09.400 20:52:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:09.400 20:52:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:09.658 Making settings changes with rpc: 00:10:09.658 20:52:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:09.658 20:52:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:09.916 Check default vs. modified settings: 00:10:09.916 20:52:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:09.916 20:52:27 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77339 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77339 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.175 Setting action_on_timeout is changed as expected. 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77339 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77339 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.175 Setting timeout_us is changed as expected. 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77339 00:10:10.175 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.436 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:10.436 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:10.436 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77339 00:10:10.436 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:10.436 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:10.436 Setting timeout_admin_us is changed as expected. 00:10:10.436 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:10.436 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:10.436 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
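Each "changed as expected" line above comes from diffing two save_config dumps, one taken before and one after bdev_nvme_set_options. A condensed sketch of the whole check; the file names and option values follow this run, and the exact-key grep is a simplification of the traced grep/awk/sed pipeline:

# Sketch: confirm bdev_nvme_set_options took effect by comparing config dumps.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
def=/tmp/settings_default_77339 mod=/tmp/settings_modified_77339

$rpc save_config > "$def"
$rpc bdev_nvme_set_options --timeout-us=12000000 \
     --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > "$mod"

get_setting() {    # match the quoted JSON key, keep only the bare value
    grep "\"$1\"" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
}

for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(get_setting "$setting" "$def")
    after=$(get_setting "$setting" "$mod")
    [[ $before == "$after" ]] && { echo "Setting $setting did not change!"; exit 1; }
    echo "Setting $setting is changed as expected."
done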
00:10:10.436 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:10.436 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77339 /tmp/settings_modified_77339 00:10:10.436 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77371 00:10:10.436 20:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77371 ']' 00:10:10.436 20:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77371 00:10:10.436 20:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:10.436 20:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:10.436 20:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77371 00:10:10.436 killing process with pid 77371 00:10:10.436 20:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.436 20:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.436 20:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77371' 00:10:10.436 20:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77371 00:10:10.436 20:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77371 00:10:10.696 RPC TIMEOUT SETTING TEST PASSED. 00:10:10.696 20:52:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:10:10.696 ************************************ 00:10:10.696 END TEST nvme_rpc_timeouts 00:10:10.696 ************************************ 00:10:10.696 00:10:10.696 real 0m2.283s 00:10:10.696 user 0m4.589s 00:10:10.696 sys 0m0.499s 00:10:10.696 20:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.696 20:52:28 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:10.696 20:52:28 -- spdk/autotest.sh@239 -- # uname -s 00:10:10.696 20:52:28 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:10.696 20:52:28 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:10.696 20:52:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:10.696 20:52:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.696 20:52:28 -- common/autotest_common.sh@10 -- # set +x 00:10:10.696 ************************************ 00:10:10.696 START TEST sw_hotplug 00:10:10.696 ************************************ 00:10:10.696 20:52:28 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:10.696 * Looking for test storage... 
00:10:10.696 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:10.696 20:52:28 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:10.696 20:52:28 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:10.696 20:52:28 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:10.956 20:52:28 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.956 20:52:28 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:10.956 20:52:28 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.956 20:52:28 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:10.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.956 --rc genhtml_branch_coverage=1 00:10:10.956 --rc genhtml_function_coverage=1 00:10:10.956 --rc genhtml_legend=1 00:10:10.956 --rc geninfo_all_blocks=1 00:10:10.956 --rc geninfo_unexecuted_blocks=1 00:10:10.956 00:10:10.956 ' 00:10:10.956 20:52:28 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:10.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.956 --rc genhtml_branch_coverage=1 00:10:10.956 --rc genhtml_function_coverage=1 00:10:10.956 --rc genhtml_legend=1 00:10:10.956 --rc geninfo_all_blocks=1 00:10:10.956 --rc geninfo_unexecuted_blocks=1 00:10:10.956 00:10:10.956 ' 00:10:10.956 20:52:28 
sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:10.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.956 --rc genhtml_branch_coverage=1 00:10:10.956 --rc genhtml_function_coverage=1 00:10:10.956 --rc genhtml_legend=1 00:10:10.956 --rc geninfo_all_blocks=1 00:10:10.956 --rc geninfo_unexecuted_blocks=1 00:10:10.956 00:10:10.956 ' 00:10:10.956 20:52:28 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:10.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.956 --rc genhtml_branch_coverage=1 00:10:10.956 --rc genhtml_function_coverage=1 00:10:10.956 --rc genhtml_legend=1 00:10:10.956 --rc geninfo_all_blocks=1 00:10:10.956 --rc geninfo_unexecuted_blocks=1 00:10:10.956 00:10:10.956 ' 00:10:10.956 20:52:28 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:11.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:11.216 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:11.216 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:11.216 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:11.216 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:11.216 20:52:29 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:11.216 20:52:29 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:11.216 20:52:29 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:10:11.216 20:52:29 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:11.216 
20:52:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:11.216 20:52:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:11.217 20:52:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:11.476 20:52:29 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:11.476 20:52:29 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:11.476 20:52:29 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:11.476 20:52:29 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:11.734 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:11.734 Waiting for block devices as requested 00:10:11.734 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.992 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.992 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.992 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:17.261 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:17.261 20:52:35 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:17.261 20:52:35 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:17.519 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:17.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:17.519 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:17.777 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:18.036 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:18.036 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:18.036 20:52:36 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:18.036 20:52:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:18.036 20:52:36 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:18.036 20:52:36 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:18.036 20:52:36 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78219 00:10:18.036 20:52:36 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:18.036 20:52:36 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:18.036 20:52:36 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:18.036 20:52:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:18.036 20:52:36 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:18.036 20:52:36 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:18.036 20:52:36 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:18.036 20:52:36 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:18.036 20:52:36 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:18.036 20:52:36 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:18.036 20:52:36 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:10:18.036 20:52:36 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:18.036 20:52:36 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:18.036 20:52:36 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:18.295 Initializing NVMe Controllers 00:10:18.295 Attaching to 0000:00:10.0 00:10:18.295 Attaching to 0000:00:11.0 00:10:18.295 Attached to 0000:00:10.0 00:10:18.295 Attached to 0000:00:11.0 00:10:18.295 Initialization complete. Starting I/O... 00:10:18.295 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:18.295 QEMU NVMe Ctrl (12341 ): 2 I/Os completed (+2) 00:10:18.295 00:10:19.229 QEMU NVMe Ctrl (12340 ): 3621 I/Os completed (+3621) 00:10:19.229 QEMU NVMe Ctrl (12341 ): 3433 I/Os completed (+3431) 00:10:19.229 00:10:20.605 QEMU NVMe Ctrl (12340 ): 7783 I/Os completed (+4162) 00:10:20.605 QEMU NVMe Ctrl (12341 ): 7642 I/Os completed (+4209) 00:10:20.605 00:10:21.539 QEMU NVMe Ctrl (12340 ): 11917 I/Os completed (+4134) 00:10:21.539 QEMU NVMe Ctrl (12341 ): 11786 I/Os completed (+4144) 00:10:21.539 00:10:22.475 QEMU NVMe Ctrl (12340 ): 16104 I/Os completed (+4187) 00:10:22.475 QEMU NVMe Ctrl (12341 ): 15925 I/Os completed (+4139) 00:10:22.475 00:10:23.427 QEMU NVMe Ctrl (12340 ): 20459 I/Os completed (+4355) 00:10:23.427 QEMU NVMe Ctrl (12341 ): 20291 I/Os completed (+4366) 00:10:23.427 00:10:24.381 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:24.381 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.381 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.381 [2024-11-20 20:52:42.137474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:24.381 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:24.381 [2024-11-20 20:52:42.138593] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.381 [2024-11-20 20:52:42.138647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.381 [2024-11-20 20:52:42.138663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.381 [2024-11-20 20:52:42.138679] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.381 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:24.381 [2024-11-20 20:52:42.140074] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.381 [2024-11-20 20:52:42.140115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.381 [2024-11-20 20:52:42.140128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.381 [2024-11-20 20:52:42.140146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.381 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.381 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.381 [2024-11-20 20:52:42.160703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:24.381 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:24.381 [2024-11-20 20:52:42.161690] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.381 [2024-11-20 20:52:42.161732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.381 [2024-11-20 20:52:42.161772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.381 [2024-11-20 20:52:42.161788] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.381 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:24.382 [2024-11-20 20:52:42.162897] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.382 [2024-11-20 20:52:42.162941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.382 [2024-11-20 20:52:42.162973] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.382 [2024-11-20 20:52:42.162987] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.382 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:24.382 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:24.382 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.382 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.382 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:24.382 00:10:24.382 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:24.382 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.382 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.382 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.382 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:24.382 Attaching to 0000:00:10.0 00:10:24.382 Attached to 0000:00:10.0 00:10:24.382 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:24.382 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.382 20:52:42 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:24.382 Attaching to 0000:00:11.0 00:10:24.382 Attached to 0000:00:11.0 00:10:25.323 QEMU NVMe Ctrl (12340 ): 3120 I/Os completed (+3120) 00:10:25.323 QEMU NVMe Ctrl (12341 ): 2922 I/Os completed (+2922) 00:10:25.323 00:10:26.263 QEMU NVMe Ctrl (12340 ): 6886 I/Os completed (+3766) 00:10:26.263 QEMU NVMe Ctrl (12341 ): 6674 I/Os completed (+3752) 00:10:26.263 00:10:27.206 QEMU NVMe Ctrl (12340 ): 9709 I/Os completed (+2823) 00:10:27.206 QEMU NVMe Ctrl (12341 ): 9560 I/Os completed (+2886) 00:10:27.206 00:10:28.590 QEMU NVMe Ctrl (12340 ): 12671 I/Os completed (+2962) 00:10:28.590 QEMU NVMe Ctrl (12341 ): 12482 I/Os completed (+2922) 00:10:28.590 00:10:29.534 QEMU NVMe Ctrl (12340 ): 18573 I/Os completed (+5902) 00:10:29.534 QEMU NVMe Ctrl (12341 ): 19168 I/Os completed (+6686) 00:10:29.534 00:10:30.472 QEMU NVMe Ctrl (12340 ): 24179 I/Os completed (+5606) 00:10:30.472 QEMU NVMe Ctrl (12341 ): 24764 I/Os completed (+5596) 00:10:30.472 00:10:31.404 QEMU NVMe Ctrl (12340 ): 28375 I/Os completed (+4196) 00:10:31.404 QEMU NVMe Ctrl (12341 ): 28826 I/Os completed (+4062) 00:10:31.404 00:10:32.341 QEMU NVMe Ctrl (12340 ): 33091 I/Os completed (+4716) 00:10:32.341 QEMU NVMe Ctrl (12341 ): 33948 I/Os completed (+5122) 
00:10:32.341 00:10:33.279 QEMU NVMe Ctrl (12340 ): 37503 I/Os completed (+4412) 00:10:33.279 QEMU NVMe Ctrl (12341 ): 38288 I/Os completed (+4340) 00:10:33.279 00:10:34.213 QEMU NVMe Ctrl (12340 ): 41616 I/Os completed (+4113) 00:10:34.213 QEMU NVMe Ctrl (12341 ): 42378 I/Os completed (+4090) 00:10:34.213 00:10:35.612 QEMU NVMe Ctrl (12340 ): 45769 I/Os completed (+4153) 00:10:35.612 QEMU NVMe Ctrl (12341 ): 46487 I/Os completed (+4109) 00:10:35.612 00:10:36.199 QEMU NVMe Ctrl (12340 ): 49962 I/Os completed (+4193) 00:10:36.199 QEMU NVMe Ctrl (12341 ): 50800 I/Os completed (+4313) 00:10:36.199 00:10:36.458 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:36.458 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.458 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.458 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.458 [2024-11-20 20:52:54.465370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:36.458 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:36.458 [2024-11-20 20:52:54.466162] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 [2024-11-20 20:52:54.466196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 [2024-11-20 20:52:54.466209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 [2024-11-20 20:52:54.466225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:36.458 [2024-11-20 20:52:54.467232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 [2024-11-20 20:52:54.467272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 [2024-11-20 20:52:54.467284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 [2024-11-20 20:52:54.467299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:10:36.458 EAL: Scan for (pci) bus failed. 00:10:36.458 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.458 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.458 [2024-11-20 20:52:54.484881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:36.458 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:36.458 [2024-11-20 20:52:54.485615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 [2024-11-20 20:52:54.485641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 [2024-11-20 20:52:54.485654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 [2024-11-20 20:52:54.485667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:36.458 [2024-11-20 20:52:54.486524] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 [2024-11-20 20:52:54.486551] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 [2024-11-20 20:52:54.486566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 [2024-11-20 20:52:54.486579] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.458 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:36.458 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:36.458 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.458 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.458 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:36.718 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:36.718 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:36.718 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.718 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.718 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:36.718 Attaching to 0000:00:10.0 00:10:36.718 Attached to 0000:00:10.0 00:10:36.718 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:36.718 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:36.718 20:52:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:36.718 Attaching to 0000:00:11.0 00:10:36.718 Attached to 0000:00:11.0 00:10:37.283 QEMU NVMe Ctrl (12340 ): 2824 I/Os completed (+2824) 00:10:37.284 QEMU NVMe Ctrl (12341 ): 2414 I/Os completed (+2414) 00:10:37.284 00:10:38.219 QEMU NVMe Ctrl (12340 ): 7332 I/Os completed (+4508) 00:10:38.219 QEMU NVMe Ctrl (12341 ): 7142 I/Os completed (+4728) 00:10:38.219 00:10:39.593 QEMU NVMe Ctrl (12340 ): 11572 I/Os completed (+4240) 00:10:39.593 QEMU NVMe Ctrl (12341 ): 11281 I/Os completed (+4139) 00:10:39.593 00:10:40.537 QEMU NVMe Ctrl (12340 ): 16349 I/Os completed (+4777) 00:10:40.537 QEMU NVMe Ctrl (12341 ): 16657 I/Os completed (+5376) 00:10:40.537 00:10:41.471 QEMU NVMe Ctrl (12340 ): 20972 I/Os completed (+4623) 00:10:41.471 QEMU NVMe Ctrl (12341 ): 22104 I/Os completed (+5447) 00:10:41.471 00:10:42.414 QEMU NVMe Ctrl (12340 ): 25680 I/Os completed (+4708) 00:10:42.414 QEMU NVMe Ctrl (12341 ): 27281 I/Os completed (+5177) 00:10:42.414 00:10:43.357 QEMU NVMe Ctrl (12340 ): 30664 I/Os completed (+4984) 00:10:43.357 QEMU NVMe Ctrl (12341 ): 32512 I/Os completed (+5231) 00:10:43.357 00:10:44.299 QEMU NVMe Ctrl (12340 ): 35845 I/Os completed (+5181) 00:10:44.299 QEMU NVMe Ctrl (12341 ): 38464 I/Os completed (+5952) 00:10:44.299 
00:10:45.243 QEMU NVMe Ctrl (12340 ): 39409 I/Os completed (+3564) 00:10:45.243 QEMU NVMe Ctrl (12341 ): 42037 I/Os completed (+3573) 00:10:45.243 00:10:46.621 QEMU NVMe Ctrl (12340 ): 44325 I/Os completed (+4916) 00:10:46.621 QEMU NVMe Ctrl (12341 ): 47433 I/Os completed (+5396) 00:10:46.621 00:10:47.555 QEMU NVMe Ctrl (12340 ): 48479 I/Os completed (+4154) 00:10:47.555 QEMU NVMe Ctrl (12341 ): 51558 I/Os completed (+4125) 00:10:47.555 00:10:48.491 QEMU NVMe Ctrl (12340 ): 52676 I/Os completed (+4197) 00:10:48.491 QEMU NVMe Ctrl (12341 ): 55682 I/Os completed (+4124) 00:10:48.491 00:10:48.750 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:48.750 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:48.750 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:48.750 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:48.750 [2024-11-20 20:53:06.720290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:48.750 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:48.750 [2024-11-20 20:53:06.721434] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.750 [2024-11-20 20:53:06.721643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.750 [2024-11-20 20:53:06.721681] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.750 [2024-11-20 20:53:06.721772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.750 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:48.750 [2024-11-20 20:53:06.723163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.750 [2024-11-20 20:53:06.723264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.750 [2024-11-20 20:53:06.723283] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.750 [2024-11-20 20:53:06.723299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.750 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:48.750 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:48.750 [2024-11-20 20:53:06.743373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:48.750 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:48.750 [2024-11-20 20:53:06.744122] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.750 [2024-11-20 20:53:06.744150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.750 [2024-11-20 20:53:06.744165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.750 [2024-11-20 20:53:06.744179] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.750 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:48.750 [2024-11-20 20:53:06.745040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.751 [2024-11-20 20:53:06.745069] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.751 [2024-11-20 20:53:06.745081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.751 [2024-11-20 20:53:06.745091] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.751 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:48.751 EAL: Scan for (pci) bus failed. 00:10:48.751 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:48.751 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:48.751 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:48.751 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:48.751 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:49.012 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:49.012 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:49.012 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:49.012 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:49.012 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:49.012 Attaching to 0000:00:10.0 00:10:49.012 Attached to 0000:00:10.0 00:10:49.012 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:49.012 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:49.012 20:53:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:49.012 Attaching to 0000:00:11.0 00:10:49.012 Attached to 0000:00:11.0 00:10:49.012 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:49.012 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:49.012 [2024-11-20 20:53:06.992571] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:01.274 20:53:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:01.274 20:53:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:01.274 20:53:18 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.86 00:11:01.274 20:53:18 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.86 00:11:01.274 20:53:18 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:01.274 20:53:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.86 00:11:01.274 20:53:18 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.86 2 00:11:01.274 remove_attach_helper took 42.86s to complete (handling 2 nvme drive(s)) 20:53:18 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:07.857 20:53:24 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78219 00:11:07.857 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78219) - No such process 00:11:07.858 20:53:24 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78219 00:11:07.858 20:53:24 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:07.858 20:53:24 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:07.858 20:53:24 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:07.858 20:53:24 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=78767 00:11:07.858 20:53:24 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:07.858 20:53:25 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 78767 00:11:07.858 20:53:24 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 78767 ']' 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.858 [2024-11-20 20:53:25.073264] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
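With the plugin-based phase done (kill -0 now reports "No such process" for pid 78219), tgt_run_hotplug repeats the exercise against a standalone spdk_tgt: start the target, arm a cleanup trap that also rescans the PCI bus on abnormal exit, and block until the RPC socket answers. In outline, with the trap text taken from the trace and waitforlisten's body assumed to be a poll on /var/tmp/spdk.sock:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &      # @109-@110
    spdk_tgt_pid=$!
    trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"                          # returns once the RPC socket accepts requests

Once the target is up, hotplug monitoring is switched on over RPC (the rpc_cmd bdev_nvme_set_hotplug -e at @115 below) before the timed helper runs again.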
00:11:07.858 [2024-11-20 20:53:25.073569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78767 ] 00:11:07.858 [2024-11-20 20:53:25.213702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.858 [2024-11-20 20:53:25.237059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:07.858 20:53:25 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.858 20:53:25 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:07.858 20:53:25 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:07.858 20:53:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:07.858 20:53:25 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:07.858 20:53:25 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:07.858 20:53:25 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:07.858 20:53:25 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:07.858 20:53:25 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:07.858 20:53:25 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:14.421 20:53:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:14.421 20:53:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.421 20:53:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.421 20:53:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.421 20:53:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.421 20:53:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:14.421 20:53:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:14.421 20:53:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:14.421 20:53:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.421 20:53:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.421 20:53:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.421 20:53:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.421 20:53:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.421 20:53:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.421 20:53:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:14.421 20:53:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:14.421 [2024-11-20 20:53:32.005475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:14.421 [2024-11-20 20:53:32.006887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.421 [2024-11-20 20:53:32.007032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.421 [2024-11-20 20:53:32.007055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.421 [2024-11-20 20:53:32.007075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.421 [2024-11-20 20:53:32.007087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.421 [2024-11-20 20:53:32.007096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.421 [2024-11-20 20:53:32.007108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.421 [2024-11-20 20:53:32.007117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.421 [2024-11-20 20:53:32.007127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.421 [2024-11-20 20:53:32.007135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.421 [2024-11-20 20:53:32.007145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.421 [2024-11-20 20:53:32.007153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.421 [2024-11-20 20:53:32.405459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:14.421 [2024-11-20 20:53:32.406530] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.421 [2024-11-20 20:53:32.406564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.421 [2024-11-20 20:53:32.406573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.421 [2024-11-20 20:53:32.406585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.421 [2024-11-20 20:53:32.406592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.421 [2024-11-20 20:53:32.406600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.421 [2024-11-20 20:53:32.406607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.421 [2024-11-20 20:53:32.406614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.421 [2024-11-20 20:53:32.406621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.421 [2024-11-20 20:53:32.406631] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.421 [2024-11-20 20:53:32.406637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:14.421 [2024-11-20 20:53:32.406645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:14.421 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:14.421 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:14.421 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:14.422 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:14.422 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:14.422 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:14.422 20:53:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.422 20:53:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:14.422 20:53:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.680 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:14.680 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:14.680 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.680 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.680 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:14.680 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:14.680 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.680 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.680 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.680 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:14.680 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:14.680 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.680 20:53:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.962 20:53:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.962 20:53:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.962 20:53:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.962 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.962 20:53:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.962 20:53:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.963 20:53:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.963 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:26.963 20:53:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:26.963 [2024-11-20 20:53:44.905665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
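Each iteration of the helper follows the same shape, visible in the @38-@66 script line numbers that keep recurring in the trace: decrement the event counter, remove both functions, poll until the NVMe bdevs are gone, rebind, then sleep 12 s before re-verifying. Reconstructed as a sketch from those line numbers, not from the script itself (the relation of the 12 s settle to hotplug_wait=6 is a guess):

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3
        while (( hotplug_events-- )); do                      # @38
            for dev in "${nvmes[@]}"; do                      # @39-@40
                echo 1 > "/sys/bus/pci/devices/$dev/remove"
            done
            # ... poll until the bdevs drain, then rebind each device (@56-@62) ...
            sleep $((hotplug_wait * 2))                       # @66: settle before the next event
        done
    }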
00:11:26.963 [2024-11-20 20:53:44.906861] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.963 [2024-11-20 20:53:44.906894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.963 [2024-11-20 20:53:44.906910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.963 [2024-11-20 20:53:44.906924] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.963 [2024-11-20 20:53:44.906933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.963 [2024-11-20 20:53:44.906941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.963 [2024-11-20 20:53:44.906950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.963 [2024-11-20 20:53:44.906957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.963 [2024-11-20 20:53:44.906966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.963 [2024-11-20 20:53:44.906972] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.963 [2024-11-20 20:53:44.906982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.963 [2024-11-20 20:53:44.906989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.559 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:27.559 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:27.559 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:27.559 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:27.559 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:27.559 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:27.559 20:53:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.559 20:53:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:27.559 [2024-11-20 20:53:45.406043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
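The "Still waiting for ... to be gone" messages come from a poll on the target's bdev list: bdev_bdfs asks the target for its bdevs over RPC and extracts the backing NVMe PCI addresses, and the helper loops with a 0.5 s sleep until that list empties. bdev_bdfs below is reconstructed from the @12/@13 trace lines (the jq filter is copied verbatim); the surrounding loop structure is an assumption:

    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do                               # @50: devices still present
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"   # @51
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done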
00:11:27.559 [2024-11-20 20:53:45.407436] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.559 [2024-11-20 20:53:45.407473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.559 [2024-11-20 20:53:45.407485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.559 [2024-11-20 20:53:45.407501] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.559 [2024-11-20 20:53:45.407509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.559 [2024-11-20 20:53:45.407517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.559 [2024-11-20 20:53:45.407525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.559 [2024-11-20 20:53:45.407534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.559 [2024-11-20 20:53:45.407541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.559 [2024-11-20 20:53:45.407549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:27.559 [2024-11-20 20:53:45.407555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:27.559 [2024-11-20 20:53:45.407565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:27.559 20:53:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.559 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:27.559 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:28.125 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:28.125 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:28.125 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:28.125 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.125 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.125 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.125 20:53:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.125 20:53:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.125 20:53:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.125 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:28.125 20:53:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:28.125 20:53:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:28.125 20:53:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:28.125 20:53:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:28.125 20:53:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:28.125 20:53:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:28.125 20:53:46 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:28.125 20:53:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:28.125 20:53:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:28.384 20:53:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:28.384 20:53:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:28.384 20:53:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:40.583 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.584 20:53:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.584 20:53:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.584 20:53:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.584 20:53:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.584 20:53:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.584 20:53:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:40.584 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:40.584 [2024-11-20 20:53:58.406314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:40.584 [2024-11-20 20:53:58.408632] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.584 [2024-11-20 20:53:58.408830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.584 [2024-11-20 20:53:58.408934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.584 [2024-11-20 20:53:58.408982] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.584 [2024-11-20 20:53:58.409009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.584 [2024-11-20 20:53:58.409041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.584 [2024-11-20 20:53:58.409075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.584 [2024-11-20 20:53:58.409098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.584 [2024-11-20 20:53:58.409186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.584 [2024-11-20 20:53:58.409221] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:40.584 [2024-11-20 20:53:58.409244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:40.584 [2024-11-20 20:53:58.409276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:40.844 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:40.844 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:40.844 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:40.844 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:40.844 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:40.844 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:40.844 20:53:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.844 20:53:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:40.845 20:53:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.845 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:40.845 20:53:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:41.105 [2024-11-20 20:53:59.006339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:41.105 [2024-11-20 20:53:59.008493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.105 [2024-11-20 20:53:59.008561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.105 [2024-11-20 20:53:59.008580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.105 [2024-11-20 20:53:59.008605] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.105 [2024-11-20 20:53:59.008615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.105 [2024-11-20 20:53:59.008631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.105 [2024-11-20 20:53:59.008640] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.105 [2024-11-20 20:53:59.008652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.105 [2024-11-20 20:53:59.008662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.105 [2024-11-20 20:53:59.008675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:41.105 [2024-11-20 20:53:59.008684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:41.105 [2024-11-20 20:53:59.008695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:41.363 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:41.363 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:41.363 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:41.363 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:41.363 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:41.363 20:53:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.363 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:41.363 20:53:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:41.363 20:53:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.363 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:41.363 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:41.650 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:41.650 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:41.650 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:41.650 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:41.650 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:41.650 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:41.650 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:41.650 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
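The 45.86 s summary printed just below is produced the same way as the earlier 42.86 s one: the harness runs the helper under bash's time builtin with TIMEFORMAT=%2R (visible at @713 in the trace), so only the elapsed wall-clock time, to two decimals, is captured. A simplified sketch of that pattern; the real timing_cmd in autotest_common.sh juggles file descriptors, this keeps only the idea:

    TIMEFORMAT=%2R                         # bash: report real time as e.g. 45.86
    helper_time=$( { time remove_attach_helper 3 6 true >/dev/null; } 2>&1 )
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2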
00:11:41.650 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:41.650 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:41.650 20:53:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.86 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.86 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.86 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.86 2 00:11:53.883 remove_attach_helper took 45.86s to complete (handling 2 nvme drive(s)) 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:53.883 20:54:11 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:53.883 20:54:11 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:53.883 20:54:11 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:00.470 20:54:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:00.470 20:54:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.470 20:54:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.470 20:54:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.470 20:54:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.470 20:54:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:00.470 20:54:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.470 20:54:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.470 20:54:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.470 20:54:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.470 20:54:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.470 20:54:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.470 20:54:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.470 20:54:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.470 20:54:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:00.470 20:54:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:00.470 [2024-11-20 20:54:17.901799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:00.470 [2024-11-20 20:54:17.902703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.470 [2024-11-20 20:54:17.902822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.470 [2024-11-20 20:54:17.902889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.470 [2024-11-20 20:54:17.902953] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.470 [2024-11-20 20:54:17.902977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.470 [2024-11-20 20:54:17.903025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.470 [2024-11-20 20:54:17.903080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.470 [2024-11-20 20:54:17.903099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.470 [2024-11-20 20:54:17.903264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.470 [2024-11-20 20:54:17.903291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.470 [2024-11-20 20:54:17.903328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.470 [2024-11-20 20:54:17.903373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.470 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:00.470 20:54:18 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.470 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.470 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.470 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.470 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.470 20:54:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.470 20:54:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.470 [2024-11-20 20:54:18.401800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:00.470 [2024-11-20 20:54:18.402675] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.470 [2024-11-20 20:54:18.402801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.470 [2024-11-20 20:54:18.402870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.470 [2024-11-20 20:54:18.402928] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.470 [2024-11-20 20:54:18.402948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.470 [2024-11-20 20:54:18.403000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.470 [2024-11-20 20:54:18.403026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.470 [2024-11-20 20:54:18.403044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.471 [2024-11-20 20:54:18.403099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.471 [2024-11-20 20:54:18.403196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.471 [2024-11-20 20:54:18.403213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.471 [2024-11-20 20:54:18.403239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.471 20:54:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.471 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:00.471 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:01.043 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:01.043 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:01.043 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:01.043 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:01.043 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:01.043 20:54:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.043 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:01.043 20:54:18 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:12:01.043 20:54:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.043 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:01.043 20:54:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:01.043 20:54:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.043 20:54:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.043 20:54:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:01.043 20:54:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:01.043 20:54:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.043 20:54:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.043 20:54:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.043 20:54:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:01.304 20:54:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:01.304 20:54:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.304 20:54:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.596 20:54:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.596 20:54:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.596 20:54:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.596 20:54:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.596 20:54:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.596 20:54:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:13.596 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:13.596 [2024-11-20 20:54:31.302049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
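After each re-attach the helper confirms, again via the bdev list, that exactly the two expected controllers came back; that is the long escaped pattern-match at @71 in the trace. In sketch form, with the array joining assumed:

    bdfs=($(bdev_bdfs))                                   # @70
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]       # @71: both functions re-enumerated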
00:12:13.596 [2024-11-20 20:54:31.303462] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.596 [2024-11-20 20:54:31.303513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.596 [2024-11-20 20:54:31.303531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.596 [2024-11-20 20:54:31.303550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.596 [2024-11-20 20:54:31.303561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.596 [2024-11-20 20:54:31.303571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.596 [2024-11-20 20:54:31.303583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.596 [2024-11-20 20:54:31.303592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.596 [2024-11-20 20:54:31.303603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.596 [2024-11-20 20:54:31.303611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.596 [2024-11-20 20:54:31.303625] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.596 [2024-11-20 20:54:31.303633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.859 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:13.859 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:13.859 [2024-11-20 20:54:31.802068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:13.859 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:13.859 [2024-11-20 20:54:31.803445] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.859 [2024-11-20 20:54:31.803491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.859 [2024-11-20 20:54:31.803507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.859 [2024-11-20 20:54:31.803525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.859 [2024-11-20 20:54:31.803534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.859 [2024-11-20 20:54:31.803545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.859 [2024-11-20 20:54:31.803554] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.859 [2024-11-20 20:54:31.803564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.859 [2024-11-20 20:54:31.803573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.859 [2024-11-20 20:54:31.803584] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:13.859 [2024-11-20 20:54:31.803592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:13.859 [2024-11-20 20:54:31.803603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:13.859 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.859 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.859 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.859 20:54:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.859 20:54:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.859 20:54:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.859 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:13.859 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:13.859 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:13.859 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:13.859 20:54:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:14.121 20:54:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:14.121 20:54:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:14.121 20:54:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:14.121 20:54:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:14.121 20:54:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:14.121 20:54:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:14.121 20:54:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:14.121 20:54:32 sw_hotplug -- nvme/sw_hotplug.sh@66 
-- # sleep 12 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:26.355 20:54:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.355 20:54:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.355 20:54:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:26.355 [2024-11-20 20:54:44.202247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:26.355 [2024-11-20 20:54:44.203999] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.355 [2024-11-20 20:54:44.204057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.355 [2024-11-20 20:54:44.204093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.355 [2024-11-20 20:54:44.204124] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.355 [2024-11-20 20:54:44.204146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.355 [2024-11-20 20:54:44.204170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.355 [2024-11-20 20:54:44.204194] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.355 [2024-11-20 20:54:44.204212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.355 [2024-11-20 20:54:44.204239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.355 [2024-11-20 20:54:44.204262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.355 [2024-11-20 20:54:44.204279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.355 [2024-11-20 20:54:44.204301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:26.355 20:54:44 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:26.355 20:54:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:26.355 20:54:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.355 20:54:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:26.355 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:26.613 [2024-11-20 20:54:44.702243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:26.613 [2024-11-20 20:54:44.703151] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.613 [2024-11-20 20:54:44.703253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.613 [2024-11-20 20:54:44.703315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.614 [2024-11-20 20:54:44.703347] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.614 [2024-11-20 20:54:44.703364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.614 [2024-11-20 20:54:44.703389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.614 [2024-11-20 20:54:44.703413] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.614 [2024-11-20 20:54:44.703434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.614 [2024-11-20 20:54:44.703493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.614 [2024-11-20 20:54:44.703520] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:26.614 [2024-11-20 20:54:44.703537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:26.614 [2024-11-20 20:54:44.703561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:26.872 20:54:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.872 20:54:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:26.872 20:54:44 sw_hotplug -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:26.872 20:54:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:27.130 20:54:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:27.130 20:54:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:27.130 20:54:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:39.330 20:54:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:39.330 20:54:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:39.330 20:54:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:39.330 20:54:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:39.330 20:54:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:39.330 20:54:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.330 20:54:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:39.330 20:54:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.30 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.30 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:39.330 20:54:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.30 00:12:39.330 20:54:57 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.30 2 00:12:39.330 remove_attach_helper took 45.30s to complete (handling 2 nvme drive(s)) 20:54:57 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:39.330 20:54:57 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 78767 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 78767 ']' 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 78767 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78767 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78767' 00:12:39.330 killing process 
with pid 78767 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@973 -- # kill 78767 00:12:39.330 20:54:57 sw_hotplug -- common/autotest_common.sh@978 -- # wait 78767 00:12:39.590 20:54:57 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:39.852 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:40.425 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:40.425 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:40.425 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:40.425 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:40.425 00:12:40.425 real 2m29.789s 00:12:40.425 user 1m51.524s 00:12:40.425 sys 0m17.008s 00:12:40.425 20:54:58 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.425 ************************************ 00:12:40.425 END TEST sw_hotplug 00:12:40.425 ************************************ 00:12:40.425 20:54:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:40.425 20:54:58 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:40.425 20:54:58 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:40.425 20:54:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:40.425 20:54:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.425 20:54:58 -- common/autotest_common.sh@10 -- # set +x 00:12:40.425 ************************************ 00:12:40.425 START TEST nvme_xnvme 00:12:40.425 ************************************ 00:12:40.425 20:54:58 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:40.689 * Looking for test storage... 00:12:40.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.689 20:54:58 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:40.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.689 --rc genhtml_branch_coverage=1 00:12:40.689 --rc genhtml_function_coverage=1 00:12:40.689 --rc genhtml_legend=1 00:12:40.689 --rc geninfo_all_blocks=1 00:12:40.689 --rc geninfo_unexecuted_blocks=1 00:12:40.689 00:12:40.689 ' 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:40.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.689 --rc genhtml_branch_coverage=1 00:12:40.689 --rc genhtml_function_coverage=1 00:12:40.689 --rc genhtml_legend=1 00:12:40.689 --rc geninfo_all_blocks=1 00:12:40.689 --rc geninfo_unexecuted_blocks=1 00:12:40.689 00:12:40.689 ' 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:40.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.689 --rc genhtml_branch_coverage=1 00:12:40.689 --rc genhtml_function_coverage=1 00:12:40.689 --rc genhtml_legend=1 00:12:40.689 --rc geninfo_all_blocks=1 00:12:40.689 --rc geninfo_unexecuted_blocks=1 00:12:40.689 00:12:40.689 ' 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:40.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.689 --rc genhtml_branch_coverage=1 00:12:40.689 --rc genhtml_function_coverage=1 00:12:40.689 --rc genhtml_legend=1 00:12:40.689 --rc geninfo_all_blocks=1 00:12:40.689 --rc geninfo_unexecuted_blocks=1 00:12:40.689 00:12:40.689 ' 00:12:40.689 20:54:58 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:40.689 20:54:58 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:40.689 20:54:58 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:40.689 20:54:58 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
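The repeated sw_hotplug xtrace above (sw_hotplug.sh@12-13, re-run at every hotplug event) shows the pipeline the test uses to map bdevs back to PCI addresses. A minimal reconstruction, assuming the rpc_cmd wrapper from autotest_common.sh; the trace feeds jq through a process substitution (/dev/fd/63), which a plain pipe reproduces:

    bdev_bdfs() {
        # Ask the running SPDK target for all bdevs over JSON-RPC,
        # pull each NVMe controller's PCI address, and de-duplicate.
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

In this run it yields '0000:00:10.0 0000:00:11.0', which sw_hotplug.sh@71 compares against the expected device list after each remove/attach cycle.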
00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:40.689 20:54:58 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@78 -- 
# CONFIG_FIO_PLUGIN=y 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:40.690 20:54:58 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:40.690 20:54:58 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:40.690 #define SPDK_CONFIG_H 00:12:40.690 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:40.690 #define SPDK_CONFIG_APPS 1 00:12:40.690 #define SPDK_CONFIG_ARCH native 00:12:40.690 #define SPDK_CONFIG_ASAN 1 00:12:40.690 #undef SPDK_CONFIG_AVAHI 00:12:40.690 #undef SPDK_CONFIG_CET 00:12:40.690 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:40.690 #define SPDK_CONFIG_COVERAGE 1 00:12:40.690 #define SPDK_CONFIG_CROSS_PREFIX 00:12:40.690 #undef SPDK_CONFIG_CRYPTO 00:12:40.690 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:40.690 #undef SPDK_CONFIG_CUSTOMOCF 00:12:40.690 #undef SPDK_CONFIG_DAOS 00:12:40.690 #define SPDK_CONFIG_DAOS_DIR 00:12:40.690 #define SPDK_CONFIG_DEBUG 1 00:12:40.690 #undef 
SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:40.690 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:40.690 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:40.690 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:40.690 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:40.690 #undef SPDK_CONFIG_DPDK_UADK 00:12:40.690 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:40.690 #define SPDK_CONFIG_EXAMPLES 1 00:12:40.690 #undef SPDK_CONFIG_FC 00:12:40.690 #define SPDK_CONFIG_FC_PATH 00:12:40.690 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:40.690 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:40.690 #define SPDK_CONFIG_FSDEV 1 00:12:40.690 #undef SPDK_CONFIG_FUSE 00:12:40.690 #undef SPDK_CONFIG_FUZZER 00:12:40.690 #define SPDK_CONFIG_FUZZER_LIB 00:12:40.690 #undef SPDK_CONFIG_GOLANG 00:12:40.690 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:40.690 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:40.690 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:40.690 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:40.690 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:40.690 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:40.690 #undef SPDK_CONFIG_HAVE_LZ4 00:12:40.690 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:40.690 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:40.690 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:40.690 #define SPDK_CONFIG_IDXD 1 00:12:40.690 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:40.690 #undef SPDK_CONFIG_IPSEC_MB 00:12:40.690 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:40.690 #define SPDK_CONFIG_ISAL 1 00:12:40.690 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:40.690 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:40.690 #define SPDK_CONFIG_LIBDIR 00:12:40.690 #undef SPDK_CONFIG_LTO 00:12:40.690 #define SPDK_CONFIG_MAX_LCORES 128 00:12:40.690 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:40.690 #define SPDK_CONFIG_NVME_CUSE 1 00:12:40.690 #undef SPDK_CONFIG_OCF 00:12:40.690 #define SPDK_CONFIG_OCF_PATH 00:12:40.690 #define SPDK_CONFIG_OPENSSL_PATH 00:12:40.690 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:40.690 #define SPDK_CONFIG_PGO_DIR 00:12:40.690 #undef SPDK_CONFIG_PGO_USE 00:12:40.690 #define SPDK_CONFIG_PREFIX /usr/local 00:12:40.690 #undef SPDK_CONFIG_RAID5F 00:12:40.690 #undef SPDK_CONFIG_RBD 00:12:40.690 #define SPDK_CONFIG_RDMA 1 00:12:40.690 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:40.690 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:40.690 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:40.690 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:40.690 #define SPDK_CONFIG_SHARED 1 00:12:40.690 #undef SPDK_CONFIG_SMA 00:12:40.690 #define SPDK_CONFIG_TESTS 1 00:12:40.690 #undef SPDK_CONFIG_TSAN 00:12:40.690 #define SPDK_CONFIG_UBLK 1 00:12:40.690 #define SPDK_CONFIG_UBSAN 1 00:12:40.690 #undef SPDK_CONFIG_UNIT_TESTS 00:12:40.690 #undef SPDK_CONFIG_URING 00:12:40.690 #define SPDK_CONFIG_URING_PATH 00:12:40.690 #undef SPDK_CONFIG_URING_ZNS 00:12:40.690 #undef SPDK_CONFIG_USDT 00:12:40.690 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:40.690 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:40.690 #undef SPDK_CONFIG_VFIO_USER 00:12:40.690 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:40.690 #define SPDK_CONFIG_VHOST 1 00:12:40.690 #define SPDK_CONFIG_VIRTIO 1 00:12:40.690 #undef SPDK_CONFIG_VTUNE 00:12:40.690 #define SPDK_CONFIG_VTUNE_DIR 00:12:40.690 #define SPDK_CONFIG_WERROR 1 00:12:40.690 #define SPDK_CONFIG_WPDK_DIR 00:12:40.690 #define SPDK_CONFIG_XNVME 1 00:12:40.690 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:40.690 20:54:58 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:40.690 20:54:58 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:40.690 20:54:58 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.690 20:54:58 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.690 20:54:58 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.690 20:54:58 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.690 20:54:58 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.690 20:54:58 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.690 20:54:58 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.690 20:54:58 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:40.690 20:54:58 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.690 20:54:58 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:40.690 20:54:58 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:40.690 20:54:58 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:40.690 20:54:58 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:40.690 20:54:58 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:40.690 20:54:58 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:40.690 20:54:58 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@68 -- 
# uname -s 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:40.691 20:54:58 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:40.691 20:54:58 nvme_xnvme -- 
common/autotest_common.sh@128 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@140 -- # : v22.11.4 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
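The long run of ': 0' / ': 1' lines each followed by an 'export SPDK_TEST_*' traced here is the autotest flag table. xtrace only prints the post-expansion value, so the ':=' default expansion is an inference rather than a copy of the source, but the pattern is plausibly:

    # Give each autotest flag a default, then export it for child scripts.
    : "${RUN_NIGHTLY:=0}"       # traced as ': 1' (@58-59): nightly run
    export RUN_NIGHTLY
    : "${SPDK_TEST_XNVME:=0}"   # traced as ': 1' (@160-161): this test is on
    export SPDK_TEST_XNVME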
00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:40.691 20:54:58 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
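Note that the LD_LIBRARY_PATH value at @184 repeats the spdk/build/lib, dpdk/build/lib and libvfio-user directories four times: each nested source of autotest_common.sh plausibly re-appends the same triple, built from the SPDK_LIB_DIR, DPDK_LIB_DIR and VFIO_LIB_DIR exports at @181-183, roughly:

    # Appends unconditionally, so repeated sourcing duplicates entries.
    export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR"

Harmless, since the dynamic loader ignores duplicate search-path entries, but it accounts for the traced value.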
00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 80133 ]] 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 80133 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.KcxuwA 00:12:40.692 20:54:58 
nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:40.692 20:54:58 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.KcxuwA/tests/xnvme /tmp/spdk.KcxuwA 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13369589760 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6212976640 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:40.954 20:54:58 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13369589760 00:12:40.955 20:54:58 nvme_xnvme -- 
common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6212976640 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97318752256 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=2384027648 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:40.955 * Looking for test storage... 
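set_test_storage (traced at autotest_common.sh@340-376) sizes every mount before settling on a test directory with the requested 2214592512 bytes of headroom. A condensed sketch of that scan, keeping the parallel-array layout the trace shows; the function name here is hypothetical, units are whatever this df build prints, and the traced mount selection (the awk over 'df /path' that resolves /home) is folded into one check:

    check_test_storage() {
        local requested_size=$1
        local -A mounts fss sizes avails uses
        local source fs size use avail _ mount
        # Record device, fs type, size, used and available per mount point.
        while read -r source fs size use avail _ mount; do
            mounts["$mount"]=$source
            fss["$mount"]=$fs
            sizes["$mount"]=$size
            uses["$mount"]=$use
            avails["$mount"]=$avail
        done < <(df -T | grep -v Filesystem)
        # /home (btrfs, avail 13369589760 in this run) satisfies 2214592512.
        (( ${avails["/home"]:-0} >= requested_size ))
    }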
00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13369589760 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:40.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:40.955 20:54:58 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:40.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.955 --rc genhtml_branch_coverage=1 00:12:40.955 --rc genhtml_function_coverage=1 00:12:40.955 --rc genhtml_legend=1 00:12:40.955 --rc geninfo_all_blocks=1 00:12:40.955 --rc geninfo_unexecuted_blocks=1 00:12:40.955 00:12:40.955 ' 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:40.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.955 --rc genhtml_branch_coverage=1 00:12:40.955 --rc genhtml_function_coverage=1 00:12:40.955 --rc genhtml_legend=1 00:12:40.955 --rc geninfo_all_blocks=1 
00:12:40.955 --rc geninfo_unexecuted_blocks=1 00:12:40.955 00:12:40.955 ' 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:40.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.955 --rc genhtml_branch_coverage=1 00:12:40.955 --rc genhtml_function_coverage=1 00:12:40.955 --rc genhtml_legend=1 00:12:40.955 --rc geninfo_all_blocks=1 00:12:40.955 --rc geninfo_unexecuted_blocks=1 00:12:40.955 00:12:40.955 ' 00:12:40.955 20:54:58 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:40.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:40.955 --rc genhtml_branch_coverage=1 00:12:40.955 --rc genhtml_function_coverage=1 00:12:40.955 --rc genhtml_legend=1 00:12:40.955 --rc geninfo_all_blocks=1 00:12:40.955 --rc geninfo_unexecuted_blocks=1 00:12:40.955 00:12:40.955 ' 00:12:40.955 20:54:58 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:40.956 20:54:58 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:40.956 20:54:58 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:40.956 20:54:58 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.956 20:54:58 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.956 20:54:58 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.956 20:54:58 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.956 20:54:58 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.956 20:54:58 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:40.956 20:54:58 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.956 20:54:58 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:40.956 20:54:58 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:41.217 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:41.479 Waiting for block devices as requested 00:12:41.479 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:41.479 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:41.740 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:41.740 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:47.036 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:47.037 20:55:04 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:47.298 20:55:05 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:47.298 20:55:05 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:47.560 20:55:05 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:47.560 20:55:05 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:47.560 No valid GPT data, bailing 00:12:47.560 20:55:05 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:47.560 20:55:05 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:47.560 20:55:05 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:47.560 20:55:05 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:47.561 20:55:05 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:47.561 20:55:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:47.561 20:55:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.561 20:55:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.561 ************************************ 00:12:47.561 START TEST xnvme_rpc 00:12:47.561 ************************************ 00:12:47.561 20:55:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:47.561 20:55:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:47.561 20:55:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:47.561 20:55:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:47.561 20:55:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:47.561 20:55:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80523 00:12:47.561 20:55:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80523 00:12:47.561 20:55:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80523 ']' 00:12:47.561 20:55:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:47.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.561 20:55:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.561 20:55:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.561 20:55:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.561 20:55:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.561 20:55:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.561 [2024-11-20 20:55:05.634539] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
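The prep_nvme step above resets the kernel driver binding, reloads the nvme module with poll_queues=10, and then picks a namespace that is safe to overwrite: spdk-gpt.py finds no GPT data and blkid reports an empty PTTYPE, so block_in_use returns 1 (free) and /dev/nvme0n1 is claimed for libaio and io_uring, /dev/ng0n1 for io_uring_cmd. A compact sketch of that guard, simplified to the blkid check (the real helper also consults spdk-gpt.py):

    block_in_use() {             # exit 0 only if a partition table exists
        local block=$1 pt
        pt=$(blkid -s PTTYPE -o value "$block")
        [[ -n "$pt" ]]
    }

    shopt -s extglob
    declare -A xnvme_filename
    for nvme in /dev/nvme*n!(*p*); do   # namespaces, skipping partitions
        block_in_use "$nvme" && continue
        xnvme_filename["libaio"]=$nvme  # first free namespace wins
        break
    done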
00:12:47.561 [2024-11-20 20:55:05.634927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80523 ] 00:12:47.823 [2024-11-20 20:55:05.775145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.823 [2024-11-20 20:55:05.815273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.397 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.397 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:48.397 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:48.397 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.397 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.660 xnvme_bdev 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:48.660 20:55:06 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80523 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80523 ']' 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80523 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80523 00:12:48.660 killing process with pid 80523 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80523' 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80523 00:12:48.660 20:55:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80523 00:12:49.234 ************************************ 00:12:49.234 END TEST xnvme_rpc 00:12:49.234 ************************************ 00:12:49.234 00:12:49.234 real 0m1.622s 00:12:49.234 user 0m1.587s 00:12:49.234 sys 0m0.499s 00:12:49.234 20:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.234 20:55:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.235 20:55:07 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:49.235 20:55:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:49.235 20:55:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:49.235 20:55:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:49.235 ************************************ 00:12:49.235 START TEST xnvme_bdevperf 00:12:49.235 ************************************ 00:12:49.235 20:55:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:49.235 20:55:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:49.235 20:55:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:49.235 20:55:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:49.235 20:55:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:49.235 20:55:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:49.235 20:55:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:49.235 20:55:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:49.235 { 00:12:49.235 "subsystems": [ 00:12:49.235 { 00:12:49.235 "subsystem": "bdev", 00:12:49.235 "config": [ 00:12:49.235 { 00:12:49.235 "params": { 00:12:49.235 "io_mechanism": "libaio", 00:12:49.235 "conserve_cpu": false, 00:12:49.235 "filename": "/dev/nvme0n1", 00:12:49.235 "name": "xnvme_bdev" 00:12:49.235 }, 00:12:49.235 "method": "bdev_xnvme_create" 00:12:49.235 }, 00:12:49.235 { 00:12:49.235 "method": "bdev_wait_for_examine" 00:12:49.235 } 00:12:49.235 ] 00:12:49.235 } 00:12:49.235 ] 00:12:49.235 } 00:12:49.235 [2024-11-20 20:55:07.311338] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:12:49.235 [2024-11-20 20:55:07.311477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80580 ] 00:12:49.497 [2024-11-20 20:55:07.458233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.497 [2024-11-20 20:55:07.497677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.757 Running I/O for 5 seconds... 00:12:51.649 28341.00 IOPS, 110.71 MiB/s [2024-11-20T20:55:10.714Z] 28834.50 IOPS, 112.63 MiB/s [2024-11-20T20:55:12.103Z] 28360.67 IOPS, 110.78 MiB/s [2024-11-20T20:55:12.676Z] 28645.25 IOPS, 111.90 MiB/s [2024-11-20T20:55:12.676Z] 28517.00 IOPS, 111.39 MiB/s 00:12:54.557 Latency(us) 00:12:54.557 [2024-11-20T20:55:12.676Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.557 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:54.557 xnvme_bdev : 5.01 28500.11 111.33 0.00 0.00 2241.32 494.67 7561.85 00:12:54.557 [2024-11-20T20:55:12.676Z] =================================================================================================================== 00:12:54.557 [2024-11-20T20:55:12.676Z] Total : 28500.11 111.33 0.00 0.00 2241.32 494.67 7561.85 00:12:54.820 20:55:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:54.820 20:55:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:54.820 20:55:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:54.820 20:55:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:54.820 20:55:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:55.082 { 00:12:55.082 "subsystems": [ 00:12:55.082 { 00:12:55.082 "subsystem": "bdev", 00:12:55.082 "config": [ 00:12:55.082 { 00:12:55.082 "params": { 00:12:55.082 "io_mechanism": "libaio", 00:12:55.082 "conserve_cpu": false, 00:12:55.082 "filename": "/dev/nvme0n1", 00:12:55.082 "name": "xnvme_bdev" 00:12:55.082 }, 00:12:55.082 "method": "bdev_xnvme_create" 00:12:55.082 }, 00:12:55.082 { 00:12:55.082 "method": "bdev_wait_for_examine" 00:12:55.082 } 00:12:55.082 ] 00:12:55.082 } 00:12:55.082 ] 00:12:55.082 } 00:12:55.082 [2024-11-20 20:55:13.002601] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
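Every bdevperf run in this section is launched the same way: gen_conf prints the bdev-subsystem JSON blob visible above, and the harness hands it to bdevperf on an inherited descriptor, /dev/fd/62. An equivalent, self-contained sketch that uses process substitution in place of the fd plumbing; paths and flags are exactly those of this run, with conserve_cpu still false:

    gen_conf() {
        printf '%s' '{"subsystems": [{"subsystem": "bdev", "config": [
          {"params": {"io_mechanism": "libaio", "conserve_cpu": false,
                      "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
           "method": "bdev_xnvme_create"},
          {"method": "bdev_wait_for_examine"}]}]}'
    }

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$bdevperf" --json <(gen_conf) -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096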
00:12:55.082 [2024-11-20 20:55:13.002770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80650 ] 00:12:55.082 [2024-11-20 20:55:13.150401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.082 [2024-11-20 20:55:13.189638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.351 Running I/O for 5 seconds... 00:12:57.241 7050.00 IOPS, 27.54 MiB/s [2024-11-20T20:55:16.746Z] 13819.50 IOPS, 53.98 MiB/s [2024-11-20T20:55:17.690Z] 20782.33 IOPS, 81.18 MiB/s [2024-11-20T20:55:18.636Z] 23893.25 IOPS, 93.33 MiB/s [2024-11-20T20:55:18.636Z] 25993.80 IOPS, 101.54 MiB/s 00:13:00.517 Latency(us) 00:13:00.517 [2024-11-20T20:55:18.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.517 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:00.517 xnvme_bdev : 5.01 25970.53 101.45 0.00 0.00 2460.51 83.50 30449.03 00:13:00.517 [2024-11-20T20:55:18.636Z] =================================================================================================================== 00:13:00.517 [2024-11-20T20:55:18.636Z] Total : 25970.53 101.45 0.00 0.00 2460.51 83.50 30449.03 00:13:00.517 00:13:00.517 real 0m11.377s 00:13:00.517 user 0m4.432s 00:13:00.517 sys 0m5.491s 00:13:00.517 ************************************ 00:13:00.517 END TEST xnvme_bdevperf 00:13:00.517 ************************************ 00:13:00.517 20:55:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.517 20:55:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:00.778 20:55:18 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:00.778 20:55:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:00.778 20:55:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.778 20:55:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.778 ************************************ 00:13:00.778 START TEST xnvme_fio_plugin 00:13:00.778 ************************************ 00:13:00.778 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:00.778 20:55:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:00.778 20:55:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:00.778 20:55:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:00.778 20:55:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:00.778 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:00.779 20:55:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:00.779 { 00:13:00.779 "subsystems": [ 00:13:00.779 { 00:13:00.779 "subsystem": "bdev", 00:13:00.779 "config": [ 00:13:00.779 { 00:13:00.779 "params": { 00:13:00.779 "io_mechanism": "libaio", 00:13:00.779 "conserve_cpu": false, 00:13:00.779 "filename": "/dev/nvme0n1", 00:13:00.779 "name": "xnvme_bdev" 00:13:00.779 }, 00:13:00.779 "method": "bdev_xnvme_create" 00:13:00.779 }, 00:13:00.779 { 00:13:00.779 "method": "bdev_wait_for_examine" 00:13:00.779 } 00:13:00.779 ] 00:13:00.779 } 00:13:00.779 ] 00:13:00.779 } 00:13:00.779 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:00.779 fio-3.35 00:13:00.779 Starting 1 thread 00:13:07.445 00:13:07.445 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80758: Wed Nov 20 20:55:24 2024 00:13:07.445 read: IOPS=36.0k, BW=141MiB/s (148MB/s)(704MiB/5001msec) 00:13:07.445 slat (usec): min=4, max=1814, avg=17.75, stdev=84.18 00:13:07.445 clat (usec): min=72, max=15946, avg=1328.73, stdev=646.25 00:13:07.445 lat (usec): min=76, max=15951, avg=1346.48, stdev=641.55 00:13:07.445 clat percentiles (usec): 00:13:07.445 | 1.00th=[ 289], 5.00th=[ 529], 10.00th=[ 685], 20.00th=[ 873], 00:13:07.445 | 30.00th=[ 1012], 40.00th=[ 1139], 50.00th=[ 1254], 60.00th=[ 1385], 00:13:07.445 | 70.00th=[ 1516], 80.00th=[ 1696], 90.00th=[ 1975], 95.00th=[ 2343], 00:13:07.445 | 99.00th=[ 3294], 99.50th=[ 4015], 99.90th=[ 7373], 99.95th=[ 8717], 00:13:07.445 | 99.99th=[11338] 00:13:07.445 bw ( KiB/s): min=134179, max=151648, 
per=99.53%, avg=143468.67, stdev=5646.88, samples=9 00:13:07.445 iops : min=33544, max=37912, avg=35867.00, stdev=1411.93, samples=9 00:13:07.445 lat (usec) : 100=0.01%, 250=0.61%, 500=3.81%, 750=8.43%, 1000=16.34% 00:13:07.445 lat (msec) : 2=61.35%, 4=8.95%, 10=0.48%, 20=0.03% 00:13:07.445 cpu : usr=46.98%, sys=43.12%, ctx=28, majf=0, minf=1065 00:13:07.445 IO depths : 1=0.3%, 2=0.8%, 4=2.3%, 8=7.2%, 16=23.0%, 32=64.0%, >=64=2.4% 00:13:07.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:07.445 complete : 0=0.0%, 4=97.8%, 8=0.1%, 16=0.2%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:07.445 issued rwts: total=180223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:07.445 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:07.445 00:13:07.445 Run status group 0 (all jobs): 00:13:07.445 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=704MiB (738MB), run=5001-5001msec 00:13:07.445 ----------------------------------------------------- 00:13:07.445 Suppressions used: 00:13:07.445 count bytes template 00:13:07.445 1 11 /usr/src/fio/parse.c 00:13:07.445 1 8 libtcmalloc_minimal.so 00:13:07.445 1 904 libcrypto.so 00:13:07.445 ----------------------------------------------------- 00:13:07.445 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:07.445 
20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:07.445 20:55:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:07.445 { 00:13:07.445 "subsystems": [ 00:13:07.445 { 00:13:07.445 "subsystem": "bdev", 00:13:07.445 "config": [ 00:13:07.445 { 00:13:07.445 "params": { 00:13:07.445 "io_mechanism": "libaio", 00:13:07.445 "conserve_cpu": false, 00:13:07.445 "filename": "/dev/nvme0n1", 00:13:07.445 "name": "xnvme_bdev" 00:13:07.445 }, 00:13:07.445 "method": "bdev_xnvme_create" 00:13:07.445 }, 00:13:07.445 { 00:13:07.445 "method": "bdev_wait_for_examine" 00:13:07.445 } 00:13:07.445 ] 00:13:07.445 } 00:13:07.445 ] 00:13:07.445 } 00:13:07.445 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:07.445 fio-3.35 00:13:07.445 Starting 1 thread 00:13:12.734 00:13:12.734 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=80839: Wed Nov 20 20:55:30 2024 00:13:12.734 write: IOPS=17.3k, BW=67.7MiB/s (71.0MB/s)(339MiB/5009msec); 0 zone resets 00:13:12.734 slat (usec): min=4, max=1304, avg=13.82, stdev=49.32 00:13:12.734 clat (usec): min=10, max=29368, avg=3546.32, stdev=4658.43 00:13:12.734 lat (usec): min=61, max=29373, avg=3560.14, stdev=4657.30 00:13:12.734 clat percentiles (usec): 00:13:12.734 | 1.00th=[ 123], 5.00th=[ 269], 10.00th=[ 371], 20.00th=[ 562], 00:13:12.734 | 30.00th=[ 660], 40.00th=[ 734], 50.00th=[ 816], 60.00th=[ 988], 00:13:12.734 | 70.00th=[ 1696], 80.00th=[ 9372], 90.00th=[11469], 95.00th=[12518], 00:13:12.734 | 99.00th=[14746], 99.50th=[15926], 99.90th=[22676], 99.95th=[24511], 00:13:12.734 | 99.99th=[27657] 00:13:12.734 bw ( KiB/s): min=49576, max=75552, per=100.00%, avg=69450.40, stdev=7443.15, samples=10 00:13:12.734 iops : min=12394, max=18888, avg=17362.60, stdev=1860.79, samples=10 00:13:12.734 lat (usec) : 20=0.03%, 50=0.11%, 100=0.38%, 250=3.88%, 500=12.39% 00:13:12.734 lat (usec) : 750=25.76%, 1000=18.07% 00:13:12.734 lat (msec) : 2=9.96%, 4=0.78%, 10=11.24%, 20=17.24%, 50=0.17% 00:13:12.734 cpu : usr=74.64%, sys=13.96%, ctx=14, majf=0, minf=1065 00:13:12.734 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=1.4%, 32=87.1%, >=64=11.4% 00:13:12.734 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:12.734 complete : 0=0.0%, 4=94.1%, 8=2.0%, 16=2.3%, 32=1.4%, 64=0.2%, >=64=0.0% 00:13:12.734 issued rwts: total=0,86867,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:12.734 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:12.734 00:13:12.734 Run status group 0 (all jobs): 00:13:12.734 WRITE: bw=67.7MiB/s (71.0MB/s), 67.7MiB/s-67.7MiB/s (71.0MB/s-71.0MB/s), io=339MiB (356MB), run=5009-5009msec 00:13:12.996 ----------------------------------------------------- 00:13:12.996 Suppressions used: 00:13:12.996 count bytes template 00:13:12.996 1 11 /usr/src/fio/parse.c 00:13:12.996 1 8 libtcmalloc_minimal.so 00:13:12.996 1 904 libcrypto.so 00:13:12.996 ----------------------------------------------------- 00:13:12.996 00:13:12.996 00:13:12.996 
real 0m12.270s 00:13:12.996 user 0m7.346s 00:13:12.996 sys 0m3.485s 00:13:12.996 20:55:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.996 20:55:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:12.996 ************************************ 00:13:12.996 END TEST xnvme_fio_plugin 00:13:12.996 ************************************ 00:13:12.996 20:55:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:12.996 20:55:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:12.996 20:55:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:12.996 20:55:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:12.996 20:55:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:12.996 20:55:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.996 20:55:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.996 ************************************ 00:13:12.996 START TEST xnvme_rpc 00:13:12.996 ************************************ 00:13:12.996 20:55:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:12.996 20:55:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:12.996 20:55:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:12.996 20:55:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:12.996 20:55:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:12.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.996 20:55:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80925 00:13:12.996 20:55:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80925 00:13:12.996 20:55:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80925 ']' 00:13:12.996 20:55:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.996 20:55:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.996 20:55:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.996 20:55:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:12.996 20:55:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.996 20:55:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.996 [2024-11-20 20:55:31.110503] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
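This second xnvme_rpc pass differs from the first (pid 80523, at 20:55:05) only in the trailing -c: xnvme.sh walks every io_mechanism and both conserve_cpu settings, creating and tearing down a bdev over the RPC socket each time. The driving loop, sketched from the arrays declared earlier in the trace (rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock):

    declare -a xnvme_io=(libaio io_uring io_uring_cmd)
    declare -A xnvme_filename=([libaio]=/dev/nvme0n1 [io_uring]=/dev/nvme0n1
                               [io_uring_cmd]=/dev/ng0n1)
    declare -A cc=([false]="" [true]="-c")

    for io in "${xnvme_io[@]}"; do
        for conserve in false true; do
            rpc_cmd bdev_xnvme_create "${xnvme_filename[$io]}" xnvme_bdev \
                    "$io" ${cc[$conserve]}    # unquoted: expands to nothing for false
            rpc_cmd bdev_xnvme_delete xnvme_bdev
        done
    done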
00:13:12.996 [2024-11-20 20:55:31.110976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80925 ] 00:13:13.257 [2024-11-20 20:55:31.258805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.257 [2024-11-20 20:55:31.300232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.203 20:55:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.203 20:55:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:14.203 20:55:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:14.203 20:55:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.203 20:55:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.203 xnvme_bdev 00:13:14.203 20:55:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.203 20:55:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:14.203 20:55:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:14.203 20:55:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.203 20:55:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.203 20:55:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80925 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80925 ']' 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80925 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80925 00:13:14.203 killing process with pid 80925 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80925' 00:13:14.203 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80925 00:13:14.204 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80925 00:13:14.778 ************************************ 00:13:14.778 END TEST xnvme_rpc 00:13:14.779 ************************************ 00:13:14.779 00:13:14.779 real 0m1.650s 00:13:14.779 user 0m1.605s 00:13:14.779 sys 0m0.505s 00:13:14.779 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.779 20:55:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:14.779 20:55:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:14.779 20:55:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:14.779 20:55:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.779 20:55:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:14.779 ************************************ 00:13:14.779 START TEST xnvme_bdevperf 00:13:14.779 ************************************ 00:13:14.779 20:55:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:14.779 20:55:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:14.779 20:55:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:14.779 20:55:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:14.779 20:55:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:14.779 20:55:32 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:14.779 20:55:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:14.779 20:55:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:14.779 { 00:13:14.779 "subsystems": [ 00:13:14.779 { 00:13:14.779 "subsystem": "bdev", 00:13:14.779 "config": [ 00:13:14.779 { 00:13:14.779 "params": { 00:13:14.779 "io_mechanism": "libaio", 00:13:14.779 "conserve_cpu": true, 00:13:14.779 "filename": "/dev/nvme0n1", 00:13:14.779 "name": "xnvme_bdev" 00:13:14.779 }, 00:13:14.779 "method": "bdev_xnvme_create" 00:13:14.779 }, 00:13:14.779 { 00:13:14.779 "method": "bdev_wait_for_examine" 00:13:14.779 } 00:13:14.779 ] 00:13:14.779 } 00:13:14.779 ] 00:13:14.779 } 00:13:14.779 [2024-11-20 20:55:32.808727] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:13:14.779 [2024-11-20 20:55:32.808881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80977 ] 00:13:15.040 [2024-11-20 20:55:32.954926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.040 [2024-11-20 20:55:32.997029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.040 Running I/O for 5 seconds... 00:13:17.374 31378.00 IOPS, 122.57 MiB/s [2024-11-20T20:55:36.438Z] 30953.00 IOPS, 120.91 MiB/s [2024-11-20T20:55:37.383Z] 30264.33 IOPS, 118.22 MiB/s [2024-11-20T20:55:38.327Z] 29977.25 IOPS, 117.10 MiB/s [2024-11-20T20:55:38.327Z] 30442.80 IOPS, 118.92 MiB/s 00:13:20.208 Latency(us) 00:13:20.208 [2024-11-20T20:55:38.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.208 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:20.208 xnvme_bdev : 5.01 30410.45 118.79 0.00 0.00 2098.61 392.27 17039.36 00:13:20.208 [2024-11-20T20:55:38.327Z] =================================================================================================================== 00:13:20.208 [2024-11-20T20:55:38.327Z] Total : 30410.45 118.79 0.00 0.00 2098.61 392.27 17039.36 00:13:20.469 20:55:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:20.469 20:55:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:20.469 20:55:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:20.469 20:55:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:20.469 20:55:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:20.469 { 00:13:20.469 "subsystems": [ 00:13:20.469 { 00:13:20.469 "subsystem": "bdev", 00:13:20.469 "config": [ 00:13:20.469 { 00:13:20.469 "params": { 00:13:20.470 "io_mechanism": "libaio", 00:13:20.470 "conserve_cpu": true, 00:13:20.470 "filename": "/dev/nvme0n1", 00:13:20.470 "name": "xnvme_bdev" 00:13:20.470 }, 00:13:20.470 "method": "bdev_xnvme_create" 00:13:20.470 }, 00:13:20.470 { 00:13:20.470 "method": "bdev_wait_for_examine" 00:13:20.470 } 00:13:20.470 ] 00:13:20.470 } 00:13:20.470 ] 00:13:20.470 } 00:13:20.470 [2024-11-20 20:55:38.491965] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
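Both fio_plugin passes (above at 20:55:18 and below at 20:55:44) need one extra trick that plain bdevperf does not: ldd shows the spdk_bdev ioengine links libasan, and the ASan runtime must come first in fio's initial library list, so the harness locates the sanitizer library and preloads it together with the plugin. A sketch of that sequence, with the ldd/awk pipeline and the fio flags exactly as in the trace, and gen_conf from the earlier sketch standing in for the /dev/fd/62 plumbing:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    # preload ASan ahead of the plugin so its interposition works
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev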
00:13:20.470 [2024-11-20 20:55:38.492363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81047 ] 00:13:20.732 [2024-11-20 20:55:38.639354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.732 [2024-11-20 20:55:38.681096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.732 Running I/O for 5 seconds... 00:13:23.057 5288.00 IOPS, 20.66 MiB/s [2024-11-20T20:55:42.120Z] 5431.50 IOPS, 21.22 MiB/s [2024-11-20T20:55:43.063Z] 5555.33 IOPS, 21.70 MiB/s [2024-11-20T20:55:44.011Z] 5527.25 IOPS, 21.59 MiB/s [2024-11-20T20:55:44.011Z] 5546.40 IOPS, 21.67 MiB/s 00:13:25.892 Latency(us) 00:13:25.892 [2024-11-20T20:55:44.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.892 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:25.892 xnvme_bdev : 5.01 5546.01 21.66 0.00 0.00 11519.66 56.32 39926.55 00:13:25.892 [2024-11-20T20:55:44.011Z] =================================================================================================================== 00:13:25.892 [2024-11-20T20:55:44.011Z] Total : 5546.01 21.66 0.00 0.00 11519.66 56.32 39926.55 00:13:26.154 00:13:26.154 real 0m11.391s 00:13:26.154 user 0m6.405s 00:13:26.154 sys 0m3.772s 00:13:26.154 ************************************ 00:13:26.154 END TEST xnvme_bdevperf 00:13:26.154 ************************************ 00:13:26.154 20:55:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.154 20:55:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:26.154 20:55:44 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:26.154 20:55:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:26.154 20:55:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.154 20:55:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:26.154 ************************************ 00:13:26.154 START TEST xnvme_fio_plugin 00:13:26.154 ************************************ 00:13:26.154 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:26.154 20:55:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:26.154 20:55:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:26.154 20:55:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:26.154 20:55:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:26.155 20:55:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:26.155 { 00:13:26.155 "subsystems": [ 00:13:26.155 { 00:13:26.155 "subsystem": "bdev", 00:13:26.155 "config": [ 00:13:26.155 { 00:13:26.155 "params": { 00:13:26.155 "io_mechanism": "libaio", 00:13:26.155 "conserve_cpu": true, 00:13:26.155 "filename": "/dev/nvme0n1", 00:13:26.155 "name": "xnvme_bdev" 00:13:26.155 }, 00:13:26.155 "method": "bdev_xnvme_create" 00:13:26.155 }, 00:13:26.155 { 00:13:26.155 "method": "bdev_wait_for_examine" 00:13:26.155 } 00:13:26.155 ] 00:13:26.155 } 00:13:26.155 ] 00:13:26.155 } 00:13:26.415 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:26.415 fio-3.35 00:13:26.415 Starting 1 thread 00:13:33.006 00:13:33.007 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81156: Wed Nov 20 20:55:49 2024 00:13:33.007 read: IOPS=33.2k, BW=130MiB/s (136MB/s)(648MiB/5001msec) 00:13:33.007 slat (usec): min=4, max=5661, avg=23.54, stdev=95.24 00:13:33.007 clat (usec): min=108, max=6761, avg=1299.25, stdev=532.02 00:13:33.007 lat (usec): min=204, max=6842, avg=1322.79, stdev=523.52 00:13:33.007 clat percentiles (usec): 00:13:33.007 | 1.00th=[ 273], 5.00th=[ 494], 10.00th=[ 652], 20.00th=[ 865], 00:13:33.007 | 30.00th=[ 1020], 40.00th=[ 1156], 50.00th=[ 1270], 60.00th=[ 1385], 00:13:33.007 | 70.00th=[ 1516], 80.00th=[ 1680], 90.00th=[ 1942], 95.00th=[ 2180], 00:13:33.007 | 99.00th=[ 2868], 99.50th=[ 3195], 99.90th=[ 3785], 99.95th=[ 4113], 00:13:33.007 | 99.99th=[ 6652] 00:13:33.007 bw ( KiB/s): min=117728, max=140640, 
per=99.81%, avg=132359.56, stdev=6596.91, samples=9 00:13:33.007 iops : min=29432, max=35160, avg=33089.89, stdev=1649.23, samples=9 00:13:33.007 lat (usec) : 250=0.74%, 500=4.44%, 750=8.94%, 1000=14.28% 00:13:33.007 lat (msec) : 2=63.18%, 4=8.35%, 10=0.07% 00:13:33.007 cpu : usr=35.52%, sys=55.18%, ctx=11, majf=0, minf=1065 00:13:33.007 IO depths : 1=0.4%, 2=1.0%, 4=2.9%, 8=8.3%, 16=23.6%, 32=61.9%, >=64=2.1% 00:13:33.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:33.007 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:13:33.007 issued rwts: total=165798,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:33.007 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:33.007 00:13:33.007 Run status group 0 (all jobs): 00:13:33.007 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=648MiB (679MB), run=5001-5001msec 00:13:33.007 ----------------------------------------------------- 00:13:33.007 Suppressions used: 00:13:33.007 count bytes template 00:13:33.007 1 11 /usr/src/fio/parse.c 00:13:33.007 1 8 libtcmalloc_minimal.so 00:13:33.007 1 904 libcrypto.so 00:13:33.007 ----------------------------------------------------- 00:13:33.007 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:33.007 { 00:13:33.007 "subsystems": [ 00:13:33.007 { 00:13:33.007 "subsystem": "bdev", 00:13:33.007 "config": [ 00:13:33.007 { 00:13:33.007 "params": { 
00:13:33.007 "io_mechanism": "libaio", 00:13:33.007 "conserve_cpu": true, 00:13:33.007 "filename": "/dev/nvme0n1", 00:13:33.007 "name": "xnvme_bdev" 00:13:33.007 }, 00:13:33.007 "method": "bdev_xnvme_create" 00:13:33.007 }, 00:13:33.007 { 00:13:33.007 "method": "bdev_wait_for_examine" 00:13:33.007 } 00:13:33.007 ] 00:13:33.007 } 00:13:33.007 ] 00:13:33.007 } 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:33.007 20:55:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:33.007 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:33.007 fio-3.35 00:13:33.007 Starting 1 thread 00:13:38.303 00:13:38.303 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81242: Wed Nov 20 20:55:55 2024 00:13:38.303 write: IOPS=24.0k, BW=93.9MiB/s (98.5MB/s)(471MiB/5010msec); 0 zone resets 00:13:38.303 slat (usec): min=4, max=1486, avg=21.88, stdev=82.23 00:13:38.303 clat (usec): min=10, max=19732, avg=2153.44, stdev=3218.58 00:13:38.303 lat (usec): min=66, max=19736, avg=2175.32, stdev=3215.02 00:13:38.303 clat percentiles (usec): 00:13:38.303 | 1.00th=[ 155], 5.00th=[ 338], 10.00th=[ 486], 20.00th=[ 668], 00:13:38.303 | 30.00th=[ 824], 40.00th=[ 1004], 50.00th=[ 1172], 60.00th=[ 1319], 00:13:38.303 | 70.00th=[ 1516], 80.00th=[ 1795], 90.00th=[ 3261], 95.00th=[11600], 00:13:38.303 | 99.00th=[13960], 99.50th=[14615], 99.90th=[16057], 99.95th=[16909], 00:13:38.303 | 99.99th=[19006] 00:13:38.303 bw ( KiB/s): min=52600, max=140368, per=100.00%, avg=96328.80, stdev=38101.75, samples=10 00:13:38.303 iops : min=13150, max=35092, avg=24082.20, stdev=9525.44, samples=10 00:13:38.303 lat (usec) : 20=0.01%, 50=0.04%, 100=0.18%, 250=2.40%, 500=7.85% 00:13:38.303 lat (usec) : 750=14.95%, 1000=14.54% 00:13:38.303 lat (msec) : 2=43.90%, 4=6.47%, 10=1.71%, 20=7.95% 00:13:38.303 cpu : usr=53.28%, sys=36.04%, ctx=19, majf=0, minf=1065 00:13:38.303 IO depths : 1=0.2%, 2=0.6%, 4=1.8%, 8=6.0%, 16=17.7%, 32=67.6%, >=64=6.1% 00:13:38.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:38.303 complete : 0=0.0%, 4=96.7%, 8=0.9%, 16=0.8%, 32=0.4%, 64=1.2%, >=64=0.0% 00:13:38.303 issued rwts: total=0,120473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:38.303 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:38.303 00:13:38.303 Run status group 0 (all jobs): 00:13:38.303 WRITE: bw=93.9MiB/s (98.5MB/s), 93.9MiB/s-93.9MiB/s (98.5MB/s-98.5MB/s), io=471MiB (493MB), run=5010-5010msec 00:13:38.303 ----------------------------------------------------- 00:13:38.303 Suppressions used: 00:13:38.303 count bytes template 00:13:38.303 1 11 /usr/src/fio/parse.c 00:13:38.303 1 8 libtcmalloc_minimal.so 00:13:38.303 1 904 libcrypto.so 00:13:38.303 ----------------------------------------------------- 00:13:38.303 00:13:38.303 00:13:38.303 real 0m12.187s 00:13:38.303 
user 0m5.633s 00:13:38.303 sys 0m5.170s 00:13:38.303 ************************************ 00:13:38.303 END TEST xnvme_fio_plugin 00:13:38.303 ************************************ 00:13:38.303 20:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:38.303 20:55:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:38.563 20:55:56 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:38.564 20:55:56 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:38.564 20:55:56 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:38.564 20:55:56 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:38.564 20:55:56 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:38.564 20:55:56 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:38.564 20:55:56 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:38.564 20:55:56 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:38.564 20:55:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:38.564 20:55:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:38.564 20:55:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:38.564 20:55:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.564 ************************************ 00:13:38.564 START TEST xnvme_rpc 00:13:38.564 ************************************ 00:13:38.564 20:55:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:38.564 20:55:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:38.564 20:55:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:38.564 20:55:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:38.564 20:55:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:38.564 20:55:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81317 00:13:38.564 20:55:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81317 00:13:38.564 20:55:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81317 ']' 00:13:38.564 20:55:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.564 20:55:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:38.564 20:55:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.564 20:55:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:38.564 20:55:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:38.564 20:55:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:38.564 [2024-11-20 20:55:56.564322] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:13:38.564 [2024-11-20 20:55:56.564800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81317 ] 00:13:38.826 [2024-11-20 20:55:56.713949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.826 [2024-11-20 20:55:56.754259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.397 xnvme_bdev 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:39.397 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81317 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81317 ']' 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81317 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81317 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:39.659 killing process with pid 81317 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81317' 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81317 00:13:39.659 20:55:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81317 00:13:40.232 00:13:40.232 real 0m1.614s 00:13:40.232 user 0m1.589s 00:13:40.232 sys 0m0.502s 00:13:40.232 ************************************ 00:13:40.232 END TEST xnvme_rpc 00:13:40.232 20:55:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.232 20:55:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.232 ************************************ 00:13:40.232 20:55:58 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:40.232 20:55:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:40.232 20:55:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.232 20:55:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.232 ************************************ 00:13:40.232 START TEST xnvme_bdevperf 00:13:40.232 ************************************ 00:13:40.232 20:55:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:40.232 20:55:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:40.232 20:55:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:40.232 20:55:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:40.232 20:55:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:40.232 20:55:58 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:40.232 20:55:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:40.232 20:55:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:40.232 { 00:13:40.232 "subsystems": [ 00:13:40.232 { 00:13:40.232 "subsystem": "bdev", 00:13:40.233 "config": [ 00:13:40.233 { 00:13:40.233 "params": { 00:13:40.233 "io_mechanism": "io_uring", 00:13:40.233 "conserve_cpu": false, 00:13:40.233 "filename": "/dev/nvme0n1", 00:13:40.233 "name": "xnvme_bdev" 00:13:40.233 }, 00:13:40.233 "method": "bdev_xnvme_create" 00:13:40.233 }, 00:13:40.233 { 00:13:40.233 "method": "bdev_wait_for_examine" 00:13:40.233 } 00:13:40.233 ] 00:13:40.233 } 00:13:40.233 ] 00:13:40.233 } 00:13:40.233 [2024-11-20 20:55:58.225145] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:13:40.233 [2024-11-20 20:55:58.225280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81376 ] 00:13:40.493 [2024-11-20 20:55:58.373111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.493 [2024-11-20 20:55:58.412294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.493 Running I/O for 5 seconds... 00:13:42.451 34638.00 IOPS, 135.30 MiB/s [2024-11-20T20:56:01.954Z] 34347.00 IOPS, 134.17 MiB/s [2024-11-20T20:56:02.895Z] 34609.33 IOPS, 135.19 MiB/s [2024-11-20T20:56:03.838Z] 34642.75 IOPS, 135.32 MiB/s [2024-11-20T20:56:03.838Z] 34626.00 IOPS, 135.26 MiB/s 00:13:45.719 Latency(us) 00:13:45.719 [2024-11-20T20:56:03.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.719 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:45.719 xnvme_bdev : 5.01 34597.21 135.15 0.00 0.00 1845.78 305.62 10838.65 00:13:45.719 [2024-11-20T20:56:03.838Z] =================================================================================================================== 00:13:45.719 [2024-11-20T20:56:03.838Z] Total : 34597.21 135.15 0.00 0.00 1845.78 305.62 10838.65 00:13:45.719 20:56:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:45.719 20:56:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:45.719 20:56:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:45.719 20:56:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:45.719 20:56:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:45.981 { 00:13:45.981 "subsystems": [ 00:13:45.981 { 00:13:45.981 "subsystem": "bdev", 00:13:45.981 "config": [ 00:13:45.981 { 00:13:45.981 "params": { 00:13:45.981 "io_mechanism": "io_uring", 00:13:45.981 "conserve_cpu": false, 00:13:45.981 "filename": "/dev/nvme0n1", 00:13:45.981 "name": "xnvme_bdev" 00:13:45.981 }, 00:13:45.981 "method": "bdev_xnvme_create" 00:13:45.981 }, 00:13:45.981 { 00:13:45.981 "method": "bdev_wait_for_examine" 00:13:45.981 } 00:13:45.981 ] 00:13:45.981 } 00:13:45.981 ] 00:13:45.981 } 00:13:45.981 [2024-11-20 20:56:03.879240] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:13:45.981 [2024-11-20 20:56:03.879395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81450 ] 00:13:45.981 [2024-11-20 20:56:04.028078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.981 [2024-11-20 20:56:04.069144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.242 Running I/O for 5 seconds... 00:13:48.130 7705.00 IOPS, 30.10 MiB/s [2024-11-20T20:56:07.635Z] 7613.50 IOPS, 29.74 MiB/s [2024-11-20T20:56:08.222Z] 7712.00 IOPS, 30.12 MiB/s [2024-11-20T20:56:09.608Z] 7737.50 IOPS, 30.22 MiB/s [2024-11-20T20:56:09.608Z] 7767.60 IOPS, 30.34 MiB/s 00:13:51.490 Latency(us) 00:13:51.490 [2024-11-20T20:56:09.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.490 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:51.490 xnvme_bdev : 5.02 7760.17 30.31 0.00 0.00 8234.03 76.41 26012.75 00:13:51.490 [2024-11-20T20:56:09.609Z] =================================================================================================================== 00:13:51.490 [2024-11-20T20:56:09.609Z] Total : 7760.17 30.31 0.00 0.00 8234.03 76.41 26012.75 00:13:51.490 00:13:51.490 real 0m11.327s 00:13:51.490 user 0m4.432s 00:13:51.490 sys 0m6.634s 00:13:51.490 20:56:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.490 20:56:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:51.490 ************************************ 00:13:51.490 END TEST xnvme_bdevperf 00:13:51.490 ************************************ 00:13:51.490 20:56:09 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:51.490 20:56:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:51.490 20:56:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.490 20:56:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.490 ************************************ 00:13:51.490 START TEST xnvme_fio_plugin 00:13:51.490 ************************************ 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:51.490 20:56:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:51.490 { 00:13:51.490 "subsystems": [ 00:13:51.490 { 00:13:51.490 "subsystem": "bdev", 00:13:51.490 "config": [ 00:13:51.490 { 00:13:51.490 "params": { 00:13:51.490 "io_mechanism": "io_uring", 00:13:51.490 "conserve_cpu": false, 00:13:51.490 "filename": "/dev/nvme0n1", 00:13:51.490 "name": "xnvme_bdev" 00:13:51.490 }, 00:13:51.490 "method": "bdev_xnvme_create" 00:13:51.490 }, 00:13:51.490 { 00:13:51.490 "method": "bdev_wait_for_examine" 00:13:51.490 } 00:13:51.490 ] 00:13:51.490 } 00:13:51.490 ] 00:13:51.490 } 00:13:51.749 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:51.749 fio-3.35 00:13:51.749 Starting 1 thread 00:13:58.311 00:13:58.311 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81553: Wed Nov 20 20:56:15 2024 00:13:58.311 read: IOPS=53.3k, BW=208MiB/s (218MB/s)(1041MiB/5001msec) 00:13:58.311 slat (nsec): min=2751, max=87707, avg=3314.18, stdev=1155.23 00:13:58.311 clat (usec): min=55, max=9880, avg=1076.55, stdev=342.82 00:13:58.311 lat (usec): min=58, max=9883, avg=1079.86, stdev=342.77 00:13:58.311 clat percentiles (usec): 00:13:58.311 | 1.00th=[ 553], 5.00th=[ 701], 10.00th=[ 734], 20.00th=[ 783], 00:13:58.311 | 30.00th=[ 832], 40.00th=[ 881], 50.00th=[ 938], 60.00th=[ 1123], 00:13:58.311 | 70.00th=[ 1336], 80.00th=[ 1418], 90.00th=[ 1516], 95.00th=[ 1598], 00:13:58.311 | 99.00th=[ 1860], 99.50th=[ 2008], 99.90th=[ 2704], 99.95th=[ 3326], 00:13:58.311 | 99.99th=[ 6456] 00:13:58.311 bw ( KiB/s): min=163840, 
max=263144, per=98.99%, avg=211008.00, stdev=45988.63, samples=9 00:13:58.311 iops : min=40960, max=65786, avg=52752.00, stdev=11497.16, samples=9 00:13:58.311 lat (usec) : 100=0.01%, 250=0.06%, 500=0.48%, 750=13.04%, 1000=40.48% 00:13:58.311 lat (msec) : 2=45.41%, 4=0.48%, 10=0.03% 00:13:58.311 cpu : usr=37.20%, sys=61.98%, ctx=16, majf=0, minf=1063 00:13:58.311 IO depths : 1=1.2%, 2=2.6%, 4=5.6%, 8=12.0%, 16=24.7%, 32=52.3%, >=64=1.7% 00:13:58.311 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.311 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:13:58.311 issued rwts: total=266496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.311 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:58.311 00:13:58.311 Run status group 0 (all jobs): 00:13:58.311 READ: bw=208MiB/s (218MB/s), 208MiB/s-208MiB/s (218MB/s-218MB/s), io=1041MiB (1092MB), run=5001-5001msec 00:13:58.311 ----------------------------------------------------- 00:13:58.311 Suppressions used: 00:13:58.311 count bytes template 00:13:58.311 1 11 /usr/src/fio/parse.c 00:13:58.311 1 8 libtcmalloc_minimal.so 00:13:58.311 1 904 libcrypto.so 00:13:58.311 ----------------------------------------------------- 00:13:58.311 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 
00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:58.311 20:56:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:58.311 { 00:13:58.311 "subsystems": [ 00:13:58.311 { 00:13:58.311 "subsystem": "bdev", 00:13:58.311 "config": [ 00:13:58.311 { 00:13:58.311 "params": { 00:13:58.311 "io_mechanism": "io_uring", 00:13:58.311 "conserve_cpu": false, 00:13:58.311 "filename": "/dev/nvme0n1", 00:13:58.311 "name": "xnvme_bdev" 00:13:58.311 }, 00:13:58.311 "method": "bdev_xnvme_create" 00:13:58.311 }, 00:13:58.311 { 00:13:58.311 "method": "bdev_wait_for_examine" 00:13:58.311 } 00:13:58.311 ] 00:13:58.311 } 00:13:58.311 ] 00:13:58.311 } 00:13:58.311 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:58.311 fio-3.35 00:13:58.311 Starting 1 thread 00:14:03.586 00:14:03.586 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81633: Wed Nov 20 20:56:21 2024 00:14:03.586 write: IOPS=42.4k, BW=166MiB/s (174MB/s)(828MiB/5001msec); 0 zone resets 00:14:03.586 slat (nsec): min=2807, max=86592, avg=3870.69, stdev=1858.63 00:14:03.586 clat (usec): min=481, max=6959, avg=1356.16, stdev=339.01 00:14:03.586 lat (usec): min=484, max=6962, avg=1360.03, stdev=339.27 00:14:03.586 clat percentiles (usec): 00:14:03.586 | 1.00th=[ 725], 5.00th=[ 816], 10.00th=[ 898], 20.00th=[ 1037], 00:14:03.586 | 30.00th=[ 1156], 40.00th=[ 1270], 50.00th=[ 1385], 60.00th=[ 1450], 00:14:03.586 | 70.00th=[ 1532], 80.00th=[ 1631], 90.00th=[ 1762], 95.00th=[ 1876], 00:14:03.586 | 99.00th=[ 2212], 99.50th=[ 2376], 99.90th=[ 2802], 99.95th=[ 2999], 00:14:03.586 | 99.99th=[ 3621] 00:14:03.586 bw ( KiB/s): min=141936, max=199168, per=97.49%, avg=165253.33, stdev=20055.20, samples=9 00:14:03.586 iops : min=35484, max=49792, avg=41313.33, stdev=5013.80, samples=9 00:14:03.586 lat (usec) : 500=0.01%, 750=1.88%, 1000=15.56% 00:14:03.586 lat (msec) : 2=79.81%, 4=2.73%, 10=0.01% 00:14:03.586 cpu : usr=34.30%, sys=64.52%, ctx=8, majf=0, minf=1063 00:14:03.586 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=24.9%, 32=50.2%, >=64=1.6% 00:14:03.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:03.586 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:03.586 issued rwts: total=0,211926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:03.586 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:03.586 00:14:03.586 Run status group 0 (all jobs): 00:14:03.586 WRITE: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=828MiB (868MB), run=5001-5001msec 00:14:03.586 ----------------------------------------------------- 00:14:03.586 Suppressions used: 00:14:03.586 count bytes template 00:14:03.586 1 11 /usr/src/fio/parse.c 00:14:03.586 1 8 libtcmalloc_minimal.so 00:14:03.586 1 904 libcrypto.so 00:14:03.586 ----------------------------------------------------- 00:14:03.586 00:14:03.586 00:14:03.586 real 0m11.905s 00:14:03.586 user 0m4.654s 00:14:03.586 sys 0m6.829s 
00:14:03.586 20:56:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.586 ************************************ 00:14:03.586 20:56:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:03.586 END TEST xnvme_fio_plugin 00:14:03.586 ************************************ 00:14:03.586 20:56:21 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:03.587 20:56:21 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:03.587 20:56:21 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:03.587 20:56:21 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:03.587 20:56:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:03.587 20:56:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.587 20:56:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:03.587 ************************************ 00:14:03.587 START TEST xnvme_rpc 00:14:03.587 ************************************ 00:14:03.587 20:56:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:03.587 20:56:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:03.587 20:56:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:03.587 20:56:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:03.587 20:56:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:03.587 20:56:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81714 00:14:03.587 20:56:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81714 00:14:03.587 20:56:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81714 ']' 00:14:03.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.587 20:56:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.587 20:56:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:03.587 20:56:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.587 20:56:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.587 20:56:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.587 20:56:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:03.587 [2024-11-20 20:56:21.588211] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:14:03.587 [2024-11-20 20:56:21.588331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81714 ] 00:14:03.845 [2024-11-20 20:56:21.731404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.845 [2024-11-20 20:56:21.756097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.412 xnvme_bdev 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.412 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:04.413 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81714 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81714 ']' 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81714 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81714 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:04.672 killing process with pid 81714 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81714' 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81714 00:14:04.672 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81714 00:14:04.931 00:14:04.931 real 0m1.387s 00:14:04.931 user 0m1.490s 00:14:04.931 sys 0m0.344s 00:14:04.931 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.931 ************************************ 00:14:04.931 END TEST xnvme_rpc 00:14:04.931 ************************************ 00:14:04.931 20:56:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.931 20:56:22 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:04.931 20:56:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:04.931 20:56:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.931 20:56:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.931 ************************************ 00:14:04.931 START TEST xnvme_bdevperf 00:14:04.931 ************************************ 00:14:04.931 20:56:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:04.931 20:56:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:04.931 20:56:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:04.931 20:56:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:04.931 20:56:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:04.931 20:56:22 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:04.931 20:56:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:04.931 20:56:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:04.931 { 00:14:04.931 "subsystems": [ 00:14:04.931 { 00:14:04.931 "subsystem": "bdev", 00:14:04.931 "config": [ 00:14:04.931 { 00:14:04.931 "params": { 00:14:04.931 "io_mechanism": "io_uring", 00:14:04.931 "conserve_cpu": true, 00:14:04.931 "filename": "/dev/nvme0n1", 00:14:04.931 "name": "xnvme_bdev" 00:14:04.931 }, 00:14:04.931 "method": "bdev_xnvme_create" 00:14:04.931 }, 00:14:04.931 { 00:14:04.931 "method": "bdev_wait_for_examine" 00:14:04.931 } 00:14:04.931 ] 00:14:04.931 } 00:14:04.931 ] 00:14:04.931 } 00:14:04.931 [2024-11-20 20:56:23.032453] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:14:04.931 [2024-11-20 20:56:23.032570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81766 ] 00:14:05.190 [2024-11-20 20:56:23.169918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.190 [2024-11-20 20:56:23.194825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.190 Running I/O for 5 seconds... 00:14:07.518 46271.00 IOPS, 180.75 MiB/s [2024-11-20T20:56:26.580Z] 44895.50 IOPS, 175.37 MiB/s [2024-11-20T20:56:27.588Z] 43775.67 IOPS, 171.00 MiB/s [2024-11-20T20:56:28.528Z] 43711.75 IOPS, 170.75 MiB/s [2024-11-20T20:56:28.528Z] 43315.00 IOPS, 169.20 MiB/s 00:14:10.409 Latency(us) 00:14:10.409 [2024-11-20T20:56:28.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.409 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:10.409 xnvme_bdev : 5.00 43299.03 169.14 0.00 0.00 1474.75 674.26 3680.10 00:14:10.409 [2024-11-20T20:56:28.528Z] =================================================================================================================== 00:14:10.409 [2024-11-20T20:56:28.529Z] Total : 43299.03 169.14 0.00 0.00 1474.75 674.26 3680.10 00:14:10.669 20:56:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:10.670 20:56:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:10.670 20:56:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:10.670 20:56:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:10.670 20:56:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:10.670 { 00:14:10.670 "subsystems": [ 00:14:10.670 { 00:14:10.670 "subsystem": "bdev", 00:14:10.670 "config": [ 00:14:10.670 { 00:14:10.670 "params": { 00:14:10.670 "io_mechanism": "io_uring", 00:14:10.670 "conserve_cpu": true, 00:14:10.670 "filename": "/dev/nvme0n1", 00:14:10.670 "name": "xnvme_bdev" 00:14:10.670 }, 00:14:10.670 "method": "bdev_xnvme_create" 00:14:10.670 }, 00:14:10.670 { 00:14:10.670 "method": "bdev_wait_for_examine" 00:14:10.670 } 00:14:10.670 ] 00:14:10.670 } 00:14:10.670 ] 00:14:10.670 } 00:14:10.670 [2024-11-20 20:56:28.603914] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:14:10.670 [2024-11-20 20:56:28.604053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81836 ] 00:14:10.670 [2024-11-20 20:56:28.747680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.930 [2024-11-20 20:56:28.786291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.930 Running I/O for 5 seconds... 00:14:13.253 40999.00 IOPS, 160.15 MiB/s [2024-11-20T20:56:31.943Z] 40250.50 IOPS, 157.23 MiB/s [2024-11-20T20:56:33.331Z] 40928.33 IOPS, 159.88 MiB/s [2024-11-20T20:56:34.274Z] 41368.25 IOPS, 161.59 MiB/s 00:14:16.155 Latency(us) 00:14:16.155 [2024-11-20T20:56:34.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.155 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:16.155 xnvme_bdev : 5.00 41724.77 162.99 0.00 0.00 1529.74 563.99 7309.78 00:14:16.155 [2024-11-20T20:56:34.274Z] =================================================================================================================== 00:14:16.155 [2024-11-20T20:56:34.274Z] Total : 41724.77 162.99 0.00 0.00 1529.74 563.99 7309.78 00:14:16.155 00:14:16.155 real 0m11.219s 00:14:16.155 user 0m7.139s 00:14:16.155 sys 0m3.374s 00:14:16.155 20:56:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.155 ************************************ 00:14:16.155 END TEST xnvme_bdevperf 00:14:16.155 ************************************ 00:14:16.155 20:56:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:16.155 20:56:34 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:16.155 20:56:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:16.155 20:56:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.155 20:56:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:16.155 ************************************ 00:14:16.155 START TEST xnvme_fio_plugin 00:14:16.155 ************************************ 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:16.155 20:56:34 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:16.155 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:16.416 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:16.416 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:16.416 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:16.416 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:16.416 20:56:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:16.416 { 00:14:16.416 "subsystems": [ 00:14:16.416 { 00:14:16.416 "subsystem": "bdev", 00:14:16.416 "config": [ 00:14:16.416 { 00:14:16.416 "params": { 00:14:16.416 "io_mechanism": "io_uring", 00:14:16.416 "conserve_cpu": true, 00:14:16.416 "filename": "/dev/nvme0n1", 00:14:16.416 "name": "xnvme_bdev" 00:14:16.416 }, 00:14:16.416 "method": "bdev_xnvme_create" 00:14:16.416 }, 00:14:16.416 { 00:14:16.416 "method": "bdev_wait_for_examine" 00:14:16.416 } 00:14:16.416 ] 00:14:16.416 } 00:14:16.416 ] 00:14:16.416 } 00:14:16.416 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:16.416 fio-3.35 00:14:16.416 Starting 1 thread 00:14:23.008 00:14:23.008 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81944: Wed Nov 20 20:56:39 2024 00:14:23.008 read: IOPS=37.0k, BW=144MiB/s (151MB/s)(722MiB/5001msec) 00:14:23.008 slat (usec): min=2, max=136, avg= 3.85, stdev= 2.06 00:14:23.008 clat (usec): min=1015, max=3324, avg=1573.19, stdev=201.69 00:14:23.008 lat (usec): min=1018, max=3358, avg=1577.03, stdev=202.15 00:14:23.008 clat percentiles (usec): 00:14:23.008 | 1.00th=[ 1237], 5.00th=[ 1319], 10.00th=[ 1369], 20.00th=[ 1418], 00:14:23.008 | 30.00th=[ 1467], 40.00th=[ 1500], 50.00th=[ 1532], 60.00th=[ 1582], 00:14:23.008 | 70.00th=[ 1631], 80.00th=[ 1713], 90.00th=[ 1844], 95.00th=[ 1958], 00:14:23.008 | 99.00th=[ 2212], 99.50th=[ 2343], 99.90th=[ 2671], 99.95th=[ 2835], 00:14:23.008 | 99.99th=[ 3195] 00:14:23.008 bw ( KiB/s): min=142336, max=152064, per=100.00%, avg=148081.78, 
stdev=3048.20, samples=9 00:14:23.008 iops : min=35584, max=38016, avg=37020.44, stdev=762.05, samples=9 00:14:23.008 lat (msec) : 2=96.05%, 4=3.95% 00:14:23.008 cpu : usr=34.98%, sys=60.46%, ctx=18, majf=0, minf=1063 00:14:23.008 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:23.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.008 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:23.008 issued rwts: total=184896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.008 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:23.008 00:14:23.008 Run status group 0 (all jobs): 00:14:23.008 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=722MiB (757MB), run=5001-5001msec 00:14:23.008 ----------------------------------------------------- 00:14:23.008 Suppressions used: 00:14:23.008 count bytes template 00:14:23.009 1 11 /usr/src/fio/parse.c 00:14:23.009 1 8 libtcmalloc_minimal.so 00:14:23.009 1 904 libcrypto.so 00:14:23.009 ----------------------------------------------------- 00:14:23.009 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:23.009 20:56:40 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:23.009 20:56:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:23.009 { 00:14:23.009 "subsystems": [ 00:14:23.009 { 00:14:23.009 "subsystem": "bdev", 00:14:23.009 "config": [ 00:14:23.009 { 00:14:23.009 "params": { 00:14:23.009 "io_mechanism": "io_uring", 00:14:23.009 "conserve_cpu": true, 00:14:23.009 "filename": "/dev/nvme0n1", 00:14:23.009 "name": "xnvme_bdev" 00:14:23.009 }, 00:14:23.009 "method": "bdev_xnvme_create" 00:14:23.009 }, 00:14:23.009 { 00:14:23.009 "method": "bdev_wait_for_examine" 00:14:23.009 } 00:14:23.009 ] 00:14:23.009 } 00:14:23.009 ] 00:14:23.009 } 00:14:23.009 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:23.009 fio-3.35 00:14:23.009 Starting 1 thread 00:14:28.303 00:14:28.303 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82030: Wed Nov 20 20:56:46 2024 00:14:28.303 write: IOPS=37.1k, BW=145MiB/s (152MB/s)(725MiB/5001msec); 0 zone resets 00:14:28.303 slat (usec): min=2, max=159, avg= 4.21, stdev= 2.12 00:14:28.303 clat (usec): min=156, max=9987, avg=1554.15, stdev=219.53 00:14:28.303 lat (usec): min=162, max=9992, avg=1558.36, stdev=219.86 00:14:28.303 clat percentiles (usec): 00:14:28.303 | 1.00th=[ 1237], 5.00th=[ 1303], 10.00th=[ 1336], 20.00th=[ 1401], 00:14:28.303 | 30.00th=[ 1434], 40.00th=[ 1483], 50.00th=[ 1516], 60.00th=[ 1565], 00:14:28.303 | 70.00th=[ 1614], 80.00th=[ 1696], 90.00th=[ 1811], 95.00th=[ 1926], 00:14:28.303 | 99.00th=[ 2180], 99.50th=[ 2278], 99.90th=[ 2769], 99.95th=[ 3064], 00:14:28.303 | 99.99th=[ 6980] 00:14:28.303 bw ( KiB/s): min=142848, max=151984, per=100.00%, avg=149168.89, stdev=2819.78, samples=9 00:14:28.303 iops : min=35712, max=37996, avg=37292.22, stdev=704.94, samples=9 00:14:28.303 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.05% 00:14:28.303 lat (msec) : 2=96.65%, 4=3.23%, 10=0.02% 00:14:28.303 cpu : usr=38.28%, sys=57.16%, ctx=13, majf=0, minf=1063 00:14:28.303 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:28.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:28.303 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:28.303 issued rwts: total=0,185606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:28.303 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:28.303 00:14:28.303 Run status group 0 (all jobs): 00:14:28.303 WRITE: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=725MiB (760MB), run=5001-5001msec 00:14:28.565 ----------------------------------------------------- 00:14:28.565 Suppressions used: 00:14:28.565 count bytes template 00:14:28.565 1 11 /usr/src/fio/parse.c 00:14:28.565 1 8 libtcmalloc_minimal.so 00:14:28.565 1 904 libcrypto.so 00:14:28.565 ----------------------------------------------------- 00:14:28.565 00:14:28.565 ************************************ 00:14:28.565 END TEST xnvme_fio_plugin 00:14:28.565 ************************************ 00:14:28.565 00:14:28.565 real 0m12.252s 00:14:28.565 user 0m4.944s 00:14:28.565 sys 0m6.532s 
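For anyone replaying the run above outside the autotest harness: the test feeds fio its SPDK config over /dev/fd/62 and preloads the plugin (plus ASan) via LD_PRELOAD. A minimal standalone sketch, assuming the same repo layout, writing the JSON to a file named xnvme.json instead of a pipe, and dropping the ASan preload that only the sanitizer build needs:

  # Sketch; paths and the xnvme.json filename are assumptions, not from the log.
  cat > xnvme.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev",
        "config": [
          { "method": "bdev_xnvme_create",
            "params": { "io_mechanism": "io_uring", "conserve_cpu": true,
                        "filename": "/dev/nvme0n1", "name": "xnvme_bdev" } },
          { "method": "bdev_wait_for_examine" }
        ] }
    ]
  }
  EOF
  # Preload the fio plugin and run the same job the log shows.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=xnvme.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name=xnvme_bdev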
00:14:28.565 20:56:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.565 20:56:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:28.565 20:56:46 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:28.566 20:56:46 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:28.566 20:56:46 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:28.566 20:56:46 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:28.566 20:56:46 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:28.566 20:56:46 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:28.566 20:56:46 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:28.566 20:56:46 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:28.566 20:56:46 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:28.566 20:56:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:28.566 20:56:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.566 20:56:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.566 ************************************ 00:14:28.566 START TEST xnvme_rpc 00:14:28.566 ************************************ 00:14:28.566 20:56:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:28.566 20:56:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:28.566 20:56:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:28.566 20:56:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:28.566 20:56:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:28.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.566 20:56:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82105 00:14:28.566 20:56:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82105 00:14:28.566 20:56:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82105 ']' 00:14:28.566 20:56:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.566 20:56:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.566 20:56:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.566 20:56:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:28.566 20:56:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.566 20:56:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:28.566 [2024-11-20 20:56:46.670190] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
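The xnvme_rpc test starting here boils down to a create/inspect/delete round trip against the target it just launched. A hand-driven equivalent, sketched with SPDK's stock RPC client (the scripts/rpc.py path is an assumption; the autotest wrapper rpc_cmd is what actually runs):

  # Sketch of the RPC sequence; argument order mirrors the log's rpc_cmd calls.
  ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
  ./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
  # expected output: io_uring_cmd
  ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev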
00:14:28.566 [2024-11-20 20:56:46.670543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82105 ] 00:14:28.828 [2024-11-20 20:56:46.813404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.828 [2024-11-20 20:56:46.855012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.772 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.772 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:29.772 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:29.772 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.772 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.773 xnvme_bdev 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82105 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82105 ']' 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82105 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82105 00:14:29.773 killing process with pid 82105 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82105' 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82105 00:14:29.773 20:56:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82105 00:14:30.347 00:14:30.347 real 0m1.610s 00:14:30.347 user 0m1.590s 00:14:30.347 sys 0m0.516s 00:14:30.347 20:56:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.347 20:56:48 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:30.347 ************************************ 00:14:30.347 END TEST xnvme_rpc 00:14:30.347 ************************************ 00:14:30.347 20:56:48 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:30.347 20:56:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:30.347 20:56:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.347 20:56:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:30.347 ************************************ 00:14:30.347 START TEST xnvme_bdevperf 00:14:30.347 ************************************ 00:14:30.347 20:56:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:30.347 20:56:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:30.347 20:56:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:30.347 20:56:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:30.347 20:56:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:30.347 20:56:48 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:30.347 20:56:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:30.347 20:56:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:30.347 { 00:14:30.347 "subsystems": [ 00:14:30.347 { 00:14:30.347 "subsystem": "bdev", 00:14:30.347 "config": [ 00:14:30.347 { 00:14:30.347 "params": { 00:14:30.347 "io_mechanism": "io_uring_cmd", 00:14:30.347 "conserve_cpu": false, 00:14:30.347 "filename": "/dev/ng0n1", 00:14:30.347 "name": "xnvme_bdev" 00:14:30.347 }, 00:14:30.347 "method": "bdev_xnvme_create" 00:14:30.347 }, 00:14:30.347 { 00:14:30.347 "method": "bdev_wait_for_examine" 00:14:30.347 } 00:14:30.347 ] 00:14:30.347 } 00:14:30.347 ] 00:14:30.347 } 00:14:30.347 [2024-11-20 20:56:48.337391] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:14:30.347 [2024-11-20 20:56:48.337823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82163 ] 00:14:30.607 [2024-11-20 20:56:48.488566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.607 [2024-11-20 20:56:48.527620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.607 Running I/O for 5 seconds... 00:14:32.962 40354.00 IOPS, 157.63 MiB/s [2024-11-20T20:56:52.025Z] 40247.00 IOPS, 157.21 MiB/s [2024-11-20T20:56:52.970Z] 40466.00 IOPS, 158.07 MiB/s [2024-11-20T20:56:53.915Z] 40345.50 IOPS, 157.60 MiB/s [2024-11-20T20:56:53.915Z] 40357.80 IOPS, 157.65 MiB/s 00:14:35.796 Latency(us) 00:14:35.796 [2024-11-20T20:56:53.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.796 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:35.796 xnvme_bdev : 5.01 40328.25 157.53 0.00 0.00 1583.64 322.95 19862.45 00:14:35.796 [2024-11-20T20:56:53.915Z] =================================================================================================================== 00:14:35.796 [2024-11-20T20:56:53.915Z] Total : 40328.25 157.53 0.00 0.00 1583.64 322.95 19862.45 00:14:36.057 20:56:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:36.057 20:56:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:36.057 20:56:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:36.057 20:56:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:36.058 20:56:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:36.058 { 00:14:36.058 "subsystems": [ 00:14:36.058 { 00:14:36.058 "subsystem": "bdev", 00:14:36.058 "config": [ 00:14:36.058 { 00:14:36.058 "params": { 00:14:36.058 "io_mechanism": "io_uring_cmd", 00:14:36.058 "conserve_cpu": false, 00:14:36.058 "filename": "/dev/ng0n1", 00:14:36.058 "name": "xnvme_bdev" 00:14:36.058 }, 00:14:36.058 "method": "bdev_xnvme_create" 00:14:36.058 }, 00:14:36.058 { 00:14:36.058 "method": "bdev_wait_for_examine" 00:14:36.058 } 00:14:36.058 ] 00:14:36.058 } 00:14:36.058 ] 00:14:36.058 } 00:14:36.058 [2024-11-20 20:56:54.002892] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
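Each bdevperf pass in this section has the same shape: a JSON bdev config on fd 62 plus queue-depth, workload, runtime, target, and IO-size flags. A sketch of the invocation with the config written to a file instead (xnvme.json is an assumed name):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096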
00:14:36.058 [2024-11-20 20:56:54.003030] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82231 ] 00:14:36.058 [2024-11-20 20:56:54.144636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.320 [2024-11-20 20:56:54.182221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.320 Running I/O for 5 seconds... 00:14:38.207 17566.00 IOPS, 68.62 MiB/s [2024-11-20T20:56:57.714Z] 17134.00 IOPS, 66.93 MiB/s [2024-11-20T20:56:58.654Z] 17167.67 IOPS, 67.06 MiB/s [2024-11-20T20:56:59.595Z] 17309.50 IOPS, 67.62 MiB/s [2024-11-20T20:56:59.595Z] 17478.20 IOPS, 68.27 MiB/s 00:14:41.476 Latency(us) 00:14:41.476 [2024-11-20T20:56:59.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.476 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:41.477 xnvme_bdev : 5.01 17464.66 68.22 0.00 0.00 3657.48 70.10 23895.43 00:14:41.477 [2024-11-20T20:56:59.596Z] =================================================================================================================== 00:14:41.477 [2024-11-20T20:56:59.596Z] Total : 17464.66 68.22 0.00 0.00 3657.48 70.10 23895.43 00:14:41.477 20:56:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:41.477 20:56:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:41.477 20:56:59 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:41.477 20:56:59 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:41.477 20:56:59 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:41.737 { 00:14:41.737 "subsystems": [ 00:14:41.737 { 00:14:41.737 "subsystem": "bdev", 00:14:41.737 "config": [ 00:14:41.737 { 00:14:41.737 "params": { 00:14:41.737 "io_mechanism": "io_uring_cmd", 00:14:41.737 "conserve_cpu": false, 00:14:41.737 "filename": "/dev/ng0n1", 00:14:41.738 "name": "xnvme_bdev" 00:14:41.738 }, 00:14:41.738 "method": "bdev_xnvme_create" 00:14:41.738 }, 00:14:41.738 { 00:14:41.738 "method": "bdev_wait_for_examine" 00:14:41.738 } 00:14:41.738 ] 00:14:41.738 } 00:14:41.738 ] 00:14:41.738 } 00:14:41.738 [2024-11-20 20:56:59.659727] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:14:41.738 [2024-11-20 20:56:59.660082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82300 ] 00:14:41.738 [2024-11-20 20:56:59.806428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.738 [2024-11-20 20:56:59.843347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.998 Running I/O for 5 seconds... 
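For reading these tables: with a fixed 4096-byte IO size, the MiB/s column is pure arithmetic on IOPS, since 4096 B / 2^20 B = 1/256:

  MiB/s = IOPS * 4096 / 2^20 = IOPS / 256
  e.g. 40328.25 IOPS / 256 = 157.53 MiB/s   (the randread total above)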
00:14:43.883 71872.00 IOPS, 280.75 MiB/s [2024-11-20T20:57:03.376Z] 73760.00 IOPS, 288.12 MiB/s [2024-11-20T20:57:04.310Z] 81088.00 IOPS, 316.75 MiB/s [2024-11-20T20:57:05.246Z] 84624.00 IOPS, 330.56 MiB/s [2024-11-20T20:57:05.246Z] 86720.00 IOPS, 338.75 MiB/s 00:14:47.127 Latency(us) 00:14:47.127 [2024-11-20T20:57:05.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.127 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:47.127 xnvme_bdev : 5.00 86695.82 338.66 0.00 0.00 735.09 434.81 2671.85 00:14:47.127 [2024-11-20T20:57:05.246Z] =================================================================================================================== 00:14:47.127 [2024-11-20T20:57:05.246Z] Total : 86695.82 338.66 0.00 0.00 735.09 434.81 2671.85 00:14:47.127 20:57:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:47.127 20:57:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:47.127 20:57:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:47.127 20:57:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:47.127 20:57:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:47.127 { 00:14:47.127 "subsystems": [ 00:14:47.127 { 00:14:47.127 "subsystem": "bdev", 00:14:47.127 "config": [ 00:14:47.127 { 00:14:47.127 "params": { 00:14:47.127 "io_mechanism": "io_uring_cmd", 00:14:47.127 "conserve_cpu": false, 00:14:47.127 "filename": "/dev/ng0n1", 00:14:47.127 "name": "xnvme_bdev" 00:14:47.127 }, 00:14:47.127 "method": "bdev_xnvme_create" 00:14:47.127 }, 00:14:47.127 { 00:14:47.127 "method": "bdev_wait_for_examine" 00:14:47.127 } 00:14:47.127 ] 00:14:47.127 } 00:14:47.127 ] 00:14:47.127 } 00:14:47.127 [2024-11-20 20:57:05.199411] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:14:47.127 [2024-11-20 20:57:05.199646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82363 ] 00:14:47.386 [2024-11-20 20:57:05.340174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.386 [2024-11-20 20:57:05.361675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.386 Running I/O for 5 seconds... 
00:14:49.329 229.00 IOPS, 0.89 MiB/s [2024-11-20T20:57:08.823Z] 234.00 IOPS, 0.91 MiB/s [2024-11-20T20:57:09.821Z] 245.33 IOPS, 0.96 MiB/s [2024-11-20T20:57:10.453Z] 227.00 IOPS, 0.89 MiB/s [2024-11-20T20:57:11.023Z] 1835.80 IOPS, 7.17 MiB/s 00:14:52.904 Latency(us) 00:14:52.904 [2024-11-20T20:57:11.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.904 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:52.904 xnvme_bdev : 5.40 1712.19 6.69 0.00 0.00 36035.76 220.55 664635.86 00:14:52.904 [2024-11-20T20:57:11.023Z] =================================================================================================================== 00:14:52.904 [2024-11-20T20:57:11.023Z] Total : 1712.19 6.69 0.00 0.00 36035.76 220.55 664635.86 00:14:53.165 00:14:53.165 real 0m22.832s 00:14:53.165 user 0m12.161s 00:14:53.165 sys 0m10.218s 00:14:53.165 20:57:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.165 20:57:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 ************************************ 00:14:53.165 END TEST xnvme_bdevperf 00:14:53.165 ************************************ 00:14:53.165 20:57:11 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:53.165 20:57:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:53.165 20:57:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.165 20:57:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 ************************************ 00:14:53.165 START TEST xnvme_fio_plugin 00:14:53.165 ************************************ 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # 
for sanitizer in "${sanitizers[@]}" 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:53.165 20:57:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:53.165 { 00:14:53.165 "subsystems": [ 00:14:53.165 { 00:14:53.165 "subsystem": "bdev", 00:14:53.165 "config": [ 00:14:53.165 { 00:14:53.165 "params": { 00:14:53.165 "io_mechanism": "io_uring_cmd", 00:14:53.165 "conserve_cpu": false, 00:14:53.166 "filename": "/dev/ng0n1", 00:14:53.166 "name": "xnvme_bdev" 00:14:53.166 }, 00:14:53.166 "method": "bdev_xnvme_create" 00:14:53.166 }, 00:14:53.166 { 00:14:53.166 "method": "bdev_wait_for_examine" 00:14:53.166 } 00:14:53.166 ] 00:14:53.166 } 00:14:53.166 ] 00:14:53.166 } 00:14:53.425 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:53.425 fio-3.35 00:14:53.425 Starting 1 thread 00:14:58.712 00:14:58.712 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82476: Wed Nov 20 20:57:16 2024 00:14:58.712 read: IOPS=45.2k, BW=176MiB/s (185MB/s)(882MiB/5002msec) 00:14:58.712 slat (usec): min=2, max=115, avg= 3.31, stdev= 1.53 00:14:58.712 clat (usec): min=571, max=3349, avg=1285.41, stdev=300.07 00:14:58.712 lat (usec): min=574, max=3383, avg=1288.72, stdev=300.48 00:14:58.712 clat percentiles (usec): 00:14:58.712 | 1.00th=[ 791], 5.00th=[ 873], 10.00th=[ 930], 20.00th=[ 1012], 00:14:58.712 | 30.00th=[ 1090], 40.00th=[ 1172], 50.00th=[ 1254], 60.00th=[ 1352], 00:14:58.712 | 70.00th=[ 1434], 80.00th=[ 1532], 90.00th=[ 1663], 95.00th=[ 1811], 00:14:58.712 | 99.00th=[ 2180], 99.50th=[ 2311], 99.90th=[ 2540], 99.95th=[ 2638], 00:14:58.712 | 99.99th=[ 3130] 00:14:58.712 bw ( KiB/s): min=152576, max=229384, per=100.00%, avg=183793.78, stdev=29274.28, samples=9 00:14:58.712 iops : min=38144, max=57350, avg=45948.44, stdev=7318.57, samples=9 00:14:58.712 lat (usec) : 750=0.23%, 1000=18.73% 00:14:58.712 lat (msec) : 2=79.02%, 4=2.02% 00:14:58.712 cpu : usr=38.73%, sys=60.19%, ctx=11, majf=0, minf=1063 00:14:58.712 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:58.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.712 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:14:58.712 issued rwts: total=225902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.712 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:58.712 00:14:58.712 Run status group 0 (all jobs): 00:14:58.712 READ: bw=176MiB/s (185MB/s), 176MiB/s-176MiB/s (185MB/s-185MB/s), io=882MiB (925MB), run=5002-5002msec 00:14:59.285 ----------------------------------------------------- 00:14:59.285 Suppressions used: 00:14:59.285 count bytes template 00:14:59.285 1 11 /usr/src/fio/parse.c 00:14:59.285 1 8 libtcmalloc_minimal.so 00:14:59.285 1 904 libcrypto.so 00:14:59.285 ----------------------------------------------------- 00:14:59.285 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:59.285 20:57:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.285 { 00:14:59.285 "subsystems": [ 00:14:59.285 { 00:14:59.285 "subsystem": "bdev", 00:14:59.285 "config": [ 00:14:59.285 { 00:14:59.285 "params": { 00:14:59.285 "io_mechanism": "io_uring_cmd", 00:14:59.285 "conserve_cpu": false, 00:14:59.285 "filename": "/dev/ng0n1", 00:14:59.285 "name": "xnvme_bdev" 00:14:59.285 }, 00:14:59.285 "method": "bdev_xnvme_create" 00:14:59.285 }, 00:14:59.285 { 00:14:59.285 "method": "bdev_wait_for_examine" 00:14:59.285 } 00:14:59.285 ] 00:14:59.285 } 00:14:59.285 ] 00:14:59.285 } 00:14:59.545 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:59.545 fio-3.35 00:14:59.545 Starting 1 thread 00:15:04.840 00:15:04.840 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82556: Wed Nov 20 20:57:22 2024 00:15:04.840 write: IOPS=37.3k, BW=146MiB/s (153MB/s)(728MiB/5001msec); 0 zone resets 00:15:04.840 slat (nsec): min=2834, max=99770, avg=4203.80, stdev=2156.55 00:15:04.840 clat (usec): min=207, max=7537, avg=1547.13, stdev=229.67 00:15:04.840 lat (usec): min=212, max=7540, avg=1551.33, stdev=230.05 00:15:04.840 clat percentiles (usec): 00:15:04.840 | 1.00th=[ 1123], 5.00th=[ 1237], 10.00th=[ 1303], 20.00th=[ 1369], 00:15:04.840 | 30.00th=[ 1418], 40.00th=[ 1467], 50.00th=[ 1516], 60.00th=[ 1565], 00:15:04.840 | 70.00th=[ 1631], 80.00th=[ 1713], 90.00th=[ 1827], 95.00th=[ 1942], 00:15:04.840 | 99.00th=[ 2245], 99.50th=[ 2376], 99.90th=[ 2933], 99.95th=[ 3294], 00:15:04.840 | 99.99th=[ 3752] 00:15:04.840 bw ( KiB/s): min=141688, max=158592, per=100.00%, avg=149147.56, stdev=4691.01, samples=9 00:15:04.840 iops : min=35422, max=39648, avg=37286.89, stdev=1172.75, samples=9 00:15:04.840 lat (usec) : 250=0.01%, 500=0.04%, 750=0.06%, 1000=0.10% 00:15:04.840 lat (msec) : 2=96.35%, 4=3.44%, 10=0.01% 00:15:04.840 cpu : usr=35.24%, sys=63.36%, ctx=8, majf=0, minf=1063 00:15:04.840 IO depths : 1=1.5%, 2=3.1%, 4=6.1%, 8=12.4%, 16=24.9%, 32=50.4%, >=64=1.6% 00:15:04.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.840 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:04.840 issued rwts: total=0,186463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.840 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:04.840 00:15:04.840 Run status group 0 (all jobs): 00:15:04.840 WRITE: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=728MiB (764MB), run=5001-5001msec 00:15:05.413 ----------------------------------------------------- 00:15:05.413 Suppressions used: 00:15:05.413 count bytes template 00:15:05.413 1 11 /usr/src/fio/parse.c 00:15:05.413 1 8 libtcmalloc_minimal.so 00:15:05.413 1 904 libcrypto.so 00:15:05.413 ----------------------------------------------------- 00:15:05.413 00:15:05.413 ************************************ 00:15:05.413 END TEST xnvme_fio_plugin 00:15:05.413 ************************************ 00:15:05.413 00:15:05.413 real 0m12.159s 00:15:05.413 user 0m4.937s 00:15:05.413 sys 0m6.788s 00:15:05.413 20:57:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:05.413 20:57:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:05.413 20:57:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:05.413 20:57:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:05.413 20:57:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:15:05.413 20:57:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:05.413 20:57:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:05.413 20:57:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:05.413 20:57:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.413 ************************************ 00:15:05.413 START TEST xnvme_rpc 00:15:05.413 ************************************ 00:15:05.413 20:57:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:05.413 20:57:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:05.413 20:57:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:05.413 20:57:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:05.413 20:57:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:05.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.413 20:57:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82635 00:15:05.413 20:57:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82635 00:15:05.413 20:57:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82635 ']' 00:15:05.413 20:57:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.413 20:57:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.413 20:57:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.413 20:57:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.413 20:57:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.413 20:57:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:05.413 [2024-11-20 20:57:23.497274] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
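This second xnvme_rpc pass repeats the round trip with conserve_cpu enabled; the only differences are the -c flag on create and the expected jq result. Sketched again with scripts/rpc.py (path assumed):

  ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
  ./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
  # expected output: true
  ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev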
00:15:05.413 [2024-11-20 20:57:23.497772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82635 ] 00:15:05.675 [2024-11-20 20:57:23.646152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.675 [2024-11-20 20:57:23.686625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.247 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.247 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:06.247 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:06.247 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.247 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.247 xnvme_bdev 00:15:06.247 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.247 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:06.247 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:06.247 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:06.247 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.247 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.508 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.508 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:06.508 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:06.508 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:06.508 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82635 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82635 ']' 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82635 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82635 00:15:06.509 killing process with pid 82635 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82635' 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82635 00:15:06.509 20:57:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82635 00:15:07.081 00:15:07.081 real 0m1.608s 00:15:07.081 user 0m1.580s 00:15:07.081 sys 0m0.502s 00:15:07.082 20:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:07.082 ************************************ 00:15:07.082 END TEST xnvme_rpc 00:15:07.082 ************************************ 00:15:07.082 20:57:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.082 20:57:25 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:07.082 20:57:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:07.082 20:57:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.082 20:57:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.082 ************************************ 00:15:07.082 START TEST xnvme_bdevperf 00:15:07.082 ************************************ 00:15:07.082 20:57:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:07.082 20:57:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:07.082 20:57:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:07.082 20:57:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:07.082 20:57:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:07.082 20:57:25 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:07.082 20:57:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:07.082 20:57:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:07.082 { 00:15:07.082 "subsystems": [ 00:15:07.082 { 00:15:07.082 "subsystem": "bdev", 00:15:07.082 "config": [ 00:15:07.082 { 00:15:07.082 "params": { 00:15:07.082 "io_mechanism": "io_uring_cmd", 00:15:07.082 "conserve_cpu": true, 00:15:07.082 "filename": "/dev/ng0n1", 00:15:07.082 "name": "xnvme_bdev" 00:15:07.082 }, 00:15:07.082 "method": "bdev_xnvme_create" 00:15:07.082 }, 00:15:07.082 { 00:15:07.082 "method": "bdev_wait_for_examine" 00:15:07.082 } 00:15:07.082 ] 00:15:07.082 } 00:15:07.082 ] 00:15:07.082 } 00:15:07.082 [2024-11-20 20:57:25.144761] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:15:07.082 [2024-11-20 20:57:25.144894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82693 ] 00:15:07.343 [2024-11-20 20:57:25.293624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.343 [2024-11-20 20:57:25.332950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.603 Running I/O for 5 seconds... 00:15:09.488 45248.00 IOPS, 176.75 MiB/s [2024-11-20T20:57:28.549Z] 45440.00 IOPS, 177.50 MiB/s [2024-11-20T20:57:29.489Z] 44906.67 IOPS, 175.42 MiB/s [2024-11-20T20:57:30.875Z] 44576.00 IOPS, 174.12 MiB/s [2024-11-20T20:57:30.875Z] 44185.60 IOPS, 172.60 MiB/s 00:15:12.756 Latency(us) 00:15:12.756 [2024-11-20T20:57:30.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.756 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:12.756 xnvme_bdev : 5.00 44181.87 172.59 0.00 0.00 1445.36 812.90 3150.77 00:15:12.756 [2024-11-20T20:57:30.875Z] =================================================================================================================== 00:15:12.756 [2024-11-20T20:57:30.875Z] Total : 44181.87 172.59 0.00 0.00 1445.36 812.90 3150.77 00:15:12.756 20:57:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:12.756 20:57:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:12.756 20:57:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:12.756 20:57:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:12.756 20:57:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:12.756 { 00:15:12.756 "subsystems": [ 00:15:12.756 { 00:15:12.756 "subsystem": "bdev", 00:15:12.756 "config": [ 00:15:12.756 { 00:15:12.756 "params": { 00:15:12.756 "io_mechanism": "io_uring_cmd", 00:15:12.756 "conserve_cpu": true, 00:15:12.756 "filename": "/dev/ng0n1", 00:15:12.756 "name": "xnvme_bdev" 00:15:12.756 }, 00:15:12.756 "method": "bdev_xnvme_create" 00:15:12.756 }, 00:15:12.756 { 00:15:12.756 "method": "bdev_wait_for_examine" 00:15:12.756 } 00:15:12.756 ] 00:15:12.756 } 00:15:12.756 ] 00:15:12.756 } 00:15:12.756 [2024-11-20 20:57:30.803997] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:15:12.756 [2024-11-20 20:57:30.804136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82756 ] 00:15:13.017 [2024-11-20 20:57:30.950606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.017 [2024-11-20 20:57:31.002059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.277 Running I/O for 5 seconds... 00:15:15.162 43366.00 IOPS, 169.40 MiB/s [2024-11-20T20:57:34.226Z] 44351.50 IOPS, 173.25 MiB/s [2024-11-20T20:57:35.168Z] 44669.00 IOPS, 174.49 MiB/s [2024-11-20T20:57:36.551Z] 44119.50 IOPS, 172.34 MiB/s [2024-11-20T20:57:36.551Z] 39672.80 IOPS, 154.97 MiB/s 00:15:18.432 Latency(us) 00:15:18.432 [2024-11-20T20:57:36.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.432 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:18.432 xnvme_bdev : 5.01 39606.32 154.71 0.00 0.00 1610.72 84.28 14922.04 00:15:18.432 [2024-11-20T20:57:36.551Z] =================================================================================================================== 00:15:18.432 [2024-11-20T20:57:36.551Z] Total : 39606.32 154.71 0.00 0.00 1610.72 84.28 14922.04 00:15:18.432 20:57:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:18.432 20:57:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:18.432 20:57:36 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:18.432 20:57:36 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:18.432 20:57:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:18.432 { 00:15:18.432 "subsystems": [ 00:15:18.432 { 00:15:18.432 "subsystem": "bdev", 00:15:18.432 "config": [ 00:15:18.432 { 00:15:18.432 "params": { 00:15:18.432 "io_mechanism": "io_uring_cmd", 00:15:18.432 "conserve_cpu": true, 00:15:18.432 "filename": "/dev/ng0n1", 00:15:18.432 "name": "xnvme_bdev" 00:15:18.432 }, 00:15:18.432 "method": "bdev_xnvme_create" 00:15:18.432 }, 00:15:18.432 { 00:15:18.432 "method": "bdev_wait_for_examine" 00:15:18.432 } 00:15:18.432 ] 00:15:18.432 } 00:15:18.432 ] 00:15:18.432 } 00:15:18.432 [2024-11-20 20:57:36.506886] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:15:18.432 [2024-11-20 20:57:36.507216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82825 ] 00:15:18.693 [2024-11-20 20:57:36.654256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.693 [2024-11-20 20:57:36.692052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.953 Running I/O for 5 seconds... 
00:15:20.836 72960.00 IOPS, 285.00 MiB/s [2024-11-20T20:57:39.889Z] 72864.00 IOPS, 284.62 MiB/s [2024-11-20T20:57:41.262Z] 77845.33 IOPS, 304.08 MiB/s [2024-11-20T20:57:42.240Z] 82112.00 IOPS, 320.75 MiB/s 00:15:24.121 Latency(us) 00:15:24.121 [2024-11-20T20:57:42.240Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.121 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:24.121 xnvme_bdev : 5.00 84661.03 330.71 0.00 0.00 752.69 392.27 2785.28 00:15:24.121 [2024-11-20T20:57:42.240Z] =================================================================================================================== 00:15:24.121 [2024-11-20T20:57:42.240Z] Total : 84661.03 330.71 0.00 0.00 752.69 392.27 2785.28 00:15:24.121 20:57:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:24.121 20:57:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:24.121 20:57:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:24.121 20:57:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:24.121 20:57:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:24.121 { 00:15:24.121 "subsystems": [ 00:15:24.121 { 00:15:24.121 "subsystem": "bdev", 00:15:24.121 "config": [ 00:15:24.121 { 00:15:24.121 "params": { 00:15:24.121 "io_mechanism": "io_uring_cmd", 00:15:24.121 "conserve_cpu": true, 00:15:24.121 "filename": "/dev/ng0n1", 00:15:24.121 "name": "xnvme_bdev" 00:15:24.121 }, 00:15:24.121 "method": "bdev_xnvme_create" 00:15:24.121 }, 00:15:24.121 { 00:15:24.121 "method": "bdev_wait_for_examine" 00:15:24.121 } 00:15:24.121 ] 00:15:24.121 } 00:15:24.121 ] 00:15:24.121 } 00:15:24.121 [2024-11-20 20:57:42.045932] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:15:24.121 [2024-11-20 20:57:42.046035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82888 ] 00:15:24.121 [2024-11-20 20:57:42.193283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.411 [2024-11-20 20:57:42.223615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.411 Running I/O for 5 seconds... 
00:15:26.309 38831.00 IOPS, 151.68 MiB/s [2024-11-20T20:57:45.367Z] 39366.00 IOPS, 153.77 MiB/s [2024-11-20T20:57:46.742Z] 39470.00 IOPS, 154.18 MiB/s [2024-11-20T20:57:47.682Z] 39992.25 IOPS, 156.22 MiB/s [2024-11-20T20:57:47.682Z] 41800.20 IOPS, 163.28 MiB/s 00:15:29.563 Latency(us) 00:15:29.563 [2024-11-20T20:57:47.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.563 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:29.563 xnvme_bdev : 5.00 41783.39 163.22 0.00 0.00 1526.73 124.46 21374.82 00:15:29.563 [2024-11-20T20:57:47.682Z] =================================================================================================================== 00:15:29.563 [2024-11-20T20:57:47.682Z] Total : 41783.39 163.22 0.00 0.00 1526.73 124.46 21374.82 00:15:29.563 00:15:29.563 real 0m22.482s 00:15:29.563 user 0m16.145s 00:15:29.563 sys 0m3.992s 00:15:29.563 ************************************ 00:15:29.563 END TEST xnvme_bdevperf 00:15:29.563 ************************************ 00:15:29.563 20:57:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.563 20:57:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:29.563 20:57:47 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:29.563 20:57:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:29.563 20:57:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.563 20:57:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:29.563 ************************************ 00:15:29.563 START TEST xnvme_fio_plugin 00:15:29.563 ************************************ 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:29.563 20:57:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:29.563 { 00:15:29.563 "subsystems": [ 00:15:29.563 { 00:15:29.563 "subsystem": "bdev", 00:15:29.563 "config": [ 00:15:29.563 { 00:15:29.563 "params": { 00:15:29.563 "io_mechanism": "io_uring_cmd", 00:15:29.563 "conserve_cpu": true, 00:15:29.563 "filename": "/dev/ng0n1", 00:15:29.563 "name": "xnvme_bdev" 00:15:29.563 }, 00:15:29.563 "method": "bdev_xnvme_create" 00:15:29.563 }, 00:15:29.563 { 00:15:29.563 "method": "bdev_wait_for_examine" 00:15:29.563 } 00:15:29.563 ] 00:15:29.563 } 00:15:29.563 ] 00:15:29.563 } 00:15:29.824 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:29.824 fio-3.35 00:15:29.824 Starting 1 thread 00:15:36.405 00:15:36.405 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82995: Wed Nov 20 20:57:53 2024 00:15:36.405 read: IOPS=44.2k, BW=173MiB/s (181MB/s)(865MiB/5002msec) 00:15:36.405 slat (usec): min=2, max=167, avg= 3.25, stdev= 1.51 00:15:36.405 clat (usec): min=694, max=3811, avg=1316.11, stdev=289.41 00:15:36.405 lat (usec): min=697, max=3874, avg=1319.36, stdev=289.78 00:15:36.405 clat percentiles (usec): 00:15:36.405 | 1.00th=[ 832], 5.00th=[ 922], 10.00th=[ 971], 20.00th=[ 1045], 00:15:36.405 | 30.00th=[ 1106], 40.00th=[ 1205], 50.00th=[ 1303], 60.00th=[ 1385], 00:15:36.405 | 70.00th=[ 1467], 80.00th=[ 1565], 90.00th=[ 1696], 95.00th=[ 1811], 00:15:36.405 | 99.00th=[ 2073], 99.50th=[ 2180], 99.90th=[ 2409], 99.95th=[ 2474], 00:15:36.405 | 99.99th=[ 3589] 00:15:36.405 bw ( KiB/s): min=144384, max=226304, per=100.00%, avg=179541.33, stdev=29435.55, samples=9 00:15:36.405 iops : min=36096, max=56576, avg=44885.33, stdev=7358.89, samples=9 00:15:36.405 lat (usec) : 750=0.07%, 1000=13.95% 00:15:36.405 lat (msec) : 2=84.34%, 4=1.63% 00:15:36.405 cpu : usr=64.35%, sys=32.55%, ctx=8, majf=0, minf=1063 00:15:36.405 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:36.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.406 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
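Annotation: the xtrace just above is the harness detecting whether the SPDK fio plugin was built with AddressSanitizer; when ldd shows a libasan dependency, the sanitizer runtime has to be preloaded ahead of the plugin or fio fails at startup. Condensed into a hedged sketch (the ldd/grep/awk/LD_PRELOAD steps are lifted from the xtrace; supplying the config as a herestring on fd 62 is an assumption about reproducing it outside the harness):

    # locate the ASan runtime the plugin links against, preload both, run fio
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring_cmd",
       "conserve_cpu":true,"filename":"/dev/ng0n1","name":"xnvme_bdev"}},
      {"method":"bdev_wait_for_examine"}]}]}'
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev 62<<<"$conf"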
>=64=0.0% 00:15:36.406 issued rwts: total=221312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.406 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:36.406 00:15:36.406 Run status group 0 (all jobs): 00:15:36.406 READ: bw=173MiB/s (181MB/s), 173MiB/s-173MiB/s (181MB/s-181MB/s), io=865MiB (906MB), run=5002-5002msec 00:15:36.406 ----------------------------------------------------- 00:15:36.406 Suppressions used: 00:15:36.406 count bytes template 00:15:36.406 1 11 /usr/src/fio/parse.c 00:15:36.406 1 8 libtcmalloc_minimal.so 00:15:36.406 1 904 libcrypto.so 00:15:36.406 ----------------------------------------------------- 00:15:36.406 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:36.406 20:57:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:36.406 { 00:15:36.406 "subsystems": [ 00:15:36.406 { 00:15:36.406 "subsystem": "bdev", 00:15:36.406 "config": [ 00:15:36.406 { 00:15:36.406 "params": { 00:15:36.406 "io_mechanism": "io_uring_cmd", 00:15:36.406 "conserve_cpu": true, 00:15:36.406 "filename": "/dev/ng0n1", 00:15:36.406 "name": "xnvme_bdev" 00:15:36.406 }, 00:15:36.406 "method": "bdev_xnvme_create" 00:15:36.406 }, 00:15:36.406 { 00:15:36.406 "method": "bdev_wait_for_examine" 00:15:36.406 } 00:15:36.406 ] 00:15:36.406 } 00:15:36.406 ] 00:15:36.406 } 00:15:36.406 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:36.406 fio-3.35 00:15:36.406 Starting 1 thread 00:15:41.696 00:15:41.696 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83075: Wed Nov 20 20:57:59 2024 00:15:41.696 write: IOPS=40.1k, BW=157MiB/s (164MB/s)(784MiB/5002msec); 0 zone resets 00:15:41.696 slat (usec): min=2, max=273, avg= 4.11, stdev= 2.31 00:15:41.696 clat (usec): min=414, max=4795, avg=1430.71, stdev=214.99 00:15:41.696 lat (usec): min=419, max=4799, avg=1434.82, stdev=215.42 00:15:41.696 clat percentiles (usec): 00:15:41.696 | 1.00th=[ 1074], 5.00th=[ 1156], 10.00th=[ 1205], 20.00th=[ 1270], 00:15:41.696 | 30.00th=[ 1319], 40.00th=[ 1352], 50.00th=[ 1401], 60.00th=[ 1450], 00:15:41.696 | 70.00th=[ 1500], 80.00th=[ 1565], 90.00th=[ 1696], 95.00th=[ 1811], 00:15:41.696 | 99.00th=[ 2089], 99.50th=[ 2245], 99.90th=[ 2933], 99.95th=[ 3228], 00:15:41.696 | 99.99th=[ 3785] 00:15:41.696 bw ( KiB/s): min=152176, max=176512, per=100.00%, avg=160805.78, stdev=7238.09, samples=9 00:15:41.696 iops : min=38044, max=44128, avg=40201.44, stdev=1809.52, samples=9 00:15:41.696 lat (usec) : 500=0.01%, 1000=0.19% 00:15:41.696 lat (msec) : 2=98.27%, 4=1.53%, 10=0.01% 00:15:41.696 cpu : usr=42.25%, sys=53.03%, ctx=9, majf=0, minf=1063 00:15:41.696 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.5%, 16=25.0%, 32=50.3%, >=64=1.6% 00:15:41.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.696 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:41.696 issued rwts: total=0,200696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.696 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:41.696 00:15:41.696 Run status group 0 (all jobs): 00:15:41.696 WRITE: bw=157MiB/s (164MB/s), 157MiB/s-157MiB/s (164MB/s-164MB/s), io=784MiB (822MB), run=5002-5002msec 00:15:41.696 ----------------------------------------------------- 00:15:41.696 Suppressions used: 00:15:41.696 count bytes template 00:15:41.696 1 11 /usr/src/fio/parse.c 00:15:41.696 1 8 libtcmalloc_minimal.so 00:15:41.696 1 904 libcrypto.so 00:15:41.696 ----------------------------------------------------- 00:15:41.696 00:15:41.696 00:15:41.696 real 0m11.997s 00:15:41.696 user 0m6.467s 00:15:41.696 sys 0m4.823s 00:15:41.696 20:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.696 20:57:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:41.696 ************************************ 00:15:41.696 END TEST xnvme_fio_plugin 00:15:41.696 ************************************ 00:15:41.696 20:57:59 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 82635 00:15:41.696 20:57:59 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 82635 ']' 00:15:41.696 Process with pid 82635 is not found 00:15:41.696 20:57:59 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 
82635 00:15:41.696 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82635) - No such process 00:15:41.696 20:57:59 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 82635 is not found' 00:15:41.696 00:15:41.696 real 3m1.142s 00:15:41.696 user 1m34.921s 00:15:41.696 sys 1m11.510s 00:15:41.696 20:57:59 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.696 ************************************ 00:15:41.696 END TEST nvme_xnvme 00:15:41.696 ************************************ 00:15:41.696 20:57:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.696 20:57:59 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:41.696 20:57:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:41.696 20:57:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.696 20:57:59 -- common/autotest_common.sh@10 -- # set +x 00:15:41.696 ************************************ 00:15:41.696 START TEST blockdev_xnvme 00:15:41.696 ************************************ 00:15:41.696 20:57:59 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:41.958 * Looking for test storage... 00:15:41.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:41.958 20:57:59 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:41.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.958 --rc genhtml_branch_coverage=1 00:15:41.958 --rc genhtml_function_coverage=1 00:15:41.958 --rc genhtml_legend=1 00:15:41.958 --rc geninfo_all_blocks=1 00:15:41.958 --rc geninfo_unexecuted_blocks=1 00:15:41.958 00:15:41.958 ' 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:41.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.958 --rc genhtml_branch_coverage=1 00:15:41.958 --rc genhtml_function_coverage=1 00:15:41.958 --rc genhtml_legend=1 00:15:41.958 --rc geninfo_all_blocks=1 00:15:41.958 --rc geninfo_unexecuted_blocks=1 00:15:41.958 00:15:41.958 ' 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:41.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.958 --rc genhtml_branch_coverage=1 00:15:41.958 --rc genhtml_function_coverage=1 00:15:41.958 --rc genhtml_legend=1 00:15:41.958 --rc geninfo_all_blocks=1 00:15:41.958 --rc geninfo_unexecuted_blocks=1 00:15:41.958 00:15:41.958 ' 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:41.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:41.958 --rc genhtml_branch_coverage=1 00:15:41.958 --rc genhtml_function_coverage=1 00:15:41.958 --rc genhtml_legend=1 00:15:41.958 --rc geninfo_all_blocks=1 00:15:41.958 --rc geninfo_unexecuted_blocks=1 00:15:41.958 00:15:41.958 ' 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=83209 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 83209 00:15:41.958 20:57:59 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 83209 ']' 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.958 20:57:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:41.958 [2024-11-20 20:57:59.993970] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
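Annotation: blockdev.sh first brings up a long-lived spdk_tgt (pid 83209 in this run) and blocks in waitforlisten until the target's RPC socket answers. Reduced to its core idea, a hedged sketch; polling with rpc_get_methods is an assumption about how to reproduce the wait, not a claim about waitforlisten's exact implementation:

    # start the SPDK target, then poll its default RPC socket until it is up
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods \
        >/dev/null 2>&1; do
      sleep 0.1
    done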
00:15:41.958 [2024-11-20 20:57:59.994363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83209 ] 00:15:42.219 [2024-11-20 20:58:00.136654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.219 [2024-11-20 20:58:00.168997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.792 20:58:00 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.792 20:58:00 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:42.792 20:58:00 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:42.792 20:58:00 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:15:42.792 20:58:00 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:42.792 20:58:00 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:42.792 20:58:00 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:43.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:43.938 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:43.938 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:43.938 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:43.938 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
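Annotation: the per-namespace checks continue below this point; the harness reads each device's sysfs zoned attribute ("none" means a conventional, non-zoned device), then builds one bdev_xnvme_create call per remaining block namespace and replays them over RPC. Both loops, condensed into a hedged sketch:

    # classify namespaces via sysfs, then queue one create per non-zoned device
    declare -A zoned_devs=()
    for sysdev in /sys/block/nvme*; do
      [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]] &&
        zoned_devs[${sysdev##*/}]=1
    done
    nvmes=()
    for nvme in /dev/nvme*n*; do
      [[ -b $nvme && -z ${zoned_devs[${nvme##*/}]:-} ]] || continue
      nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} io_uring -c")
    done
    printf '%s\n' "${nvmes[@]}"   # the harness feeds these lines to rpc_cmd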
00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:43.938 nvme0n1 00:15:43.938 nvme0n2 00:15:43.938 nvme0n3 00:15:43.938 nvme1n1 00:15:43.938 nvme2n1 00:15:43.938 nvme3n1 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.938 20:58:01 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.938 20:58:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.938 20:58:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.938 20:58:02 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:43.938 20:58:02 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.938 20:58:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.938 20:58:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.938 20:58:02 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:43.938 20:58:02 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:43.938 20:58:02 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.938 20:58:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.938 20:58:02 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq 
-r '.[] | select(.claimed == false)' 00:15:44.200 20:58:02 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.200 20:58:02 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:44.200 20:58:02 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:44.201 20:58:02 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "eec91a3b-987c-44c9-a6ea-b4db1f66d2d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eec91a3b-987c-44c9-a6ea-b4db1f66d2d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "dc72f408-c3da-4110-8acb-312ffb15b3ab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dc72f408-c3da-4110-8acb-312ffb15b3ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "e79e1402-cf97-40fa-844e-c6aabf2b5471"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e79e1402-cf97-40fa-844e-c6aabf2b5471",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "65f57441-f764-4bed-934b-a1a62461ab08"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "65f57441-f764-4bed-934b-a1a62461ab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' 
' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "396c1868-2fb5-4772-ae5f-9e7554b420cc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "396c1868-2fb5-4772-ae5f-9e7554b420cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "55bc306f-6c8f-4bb4-982c-cb1d421a445f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "55bc306f-6c8f-4bb4-982c-cb1d421a445f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:44.201 20:58:02 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:44.201 20:58:02 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:15:44.201 20:58:02 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:44.201 20:58:02 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 83209 00:15:44.201 20:58:02 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 83209 ']' 00:15:44.201 20:58:02 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 83209 00:15:44.201 20:58:02 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:44.201 20:58:02 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:44.201 20:58:02 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83209 00:15:44.201 killing process with pid 83209 00:15:44.201 20:58:02 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:44.201 20:58:02 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:44.201 20:58:02 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83209' 00:15:44.201 20:58:02 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 83209 00:15:44.201 20:58:02 
blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 83209 00:15:44.462 20:58:02 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:44.462 20:58:02 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:44.462 20:58:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:44.462 20:58:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.462 20:58:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:44.463 ************************************ 00:15:44.463 START TEST bdev_hello_world 00:15:44.463 ************************************ 00:15:44.463 20:58:02 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:44.724 [2024-11-20 20:58:02.582103] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:15:44.724 [2024-11-20 20:58:02.582410] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83471 ] 00:15:44.724 [2024-11-20 20:58:02.728771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.724 [2024-11-20 20:58:02.765387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.985 [2024-11-20 20:58:03.021623] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:44.985 [2024-11-20 20:58:03.021800] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:44.985 [2024-11-20 20:58:03.021832] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:44.985 [2024-11-20 20:58:03.024380] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:44.985 [2024-11-20 20:58:03.025078] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:44.985 [2024-11-20 20:58:03.025275] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:44.986 [2024-11-20 20:58:03.025794] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
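Annotation: the NOTICE lines above are the entire hello_bdev round trip: open the bdev, open an I/O channel, write the greeting, read it back, compare. The same example can be run by hand against the generated config, using the exact invocation from this log:

    # write "Hello World!" to the first xnvme bdev and read it back
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1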
00:15:44.986 00:15:44.986 [2024-11-20 20:58:03.025837] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:45.247 ************************************ 00:15:45.247 END TEST bdev_hello_world 00:15:45.247 ************************************ 00:15:45.247 00:15:45.247 real 0m0.757s 00:15:45.247 user 0m0.390s 00:15:45.247 sys 0m0.225s 00:15:45.247 20:58:03 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.247 20:58:03 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:45.247 20:58:03 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:45.247 20:58:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:45.247 20:58:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.247 20:58:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:45.247 ************************************ 00:15:45.247 START TEST bdev_bounds 00:15:45.247 ************************************ 00:15:45.247 20:58:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:45.247 20:58:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=83502 00:15:45.247 20:58:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:45.247 Process bdevio pid: 83502 00:15:45.247 20:58:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 83502' 00:15:45.247 20:58:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 83502 00:15:45.247 20:58:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 83502 ']' 00:15:45.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.247 20:58:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.247 20:58:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.247 20:58:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.247 20:58:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.247 20:58:03 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:45.247 20:58:03 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:45.508 [2024-11-20 20:58:03.420989] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
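Annotation: bdevio is launched here with -w, which makes it register the bdevs and then wait; the CUnit suites only start once tests.py sends the perform_tests RPC, as the transcript below shows. The same flow, split across two shells (both commands appear verbatim in this log):

    # shell 1: bdevio initializes the bdevs, then waits (-w) for the go signal
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    # shell 2: fire the RPC that kicks off every registered CUnit suite
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests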
00:15:45.508 [2024-11-20 20:58:03.421147] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83502 ] 00:15:45.508 [2024-11-20 20:58:03.571177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:45.508 [2024-11-20 20:58:03.614458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.508 [2024-11-20 20:58:03.614852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.508 [2024-11-20 20:58:03.614930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.455 20:58:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.455 20:58:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:46.455 20:58:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:46.455 I/O targets: 00:15:46.455 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:46.455 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:46.455 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:46.455 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:46.455 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:46.455 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:46.455 00:15:46.455 00:15:46.455 CUnit - A unit testing framework for C - Version 2.1-3 00:15:46.455 http://cunit.sourceforge.net/ 00:15:46.455 00:15:46.455 00:15:46.455 Suite: bdevio tests on: nvme3n1 00:15:46.455 Test: blockdev write read block ...passed 00:15:46.455 Test: blockdev write zeroes read block ...passed 00:15:46.455 Test: blockdev write zeroes read no split ...passed 00:15:46.455 Test: blockdev write zeroes read split ...passed 00:15:46.455 Test: blockdev write zeroes read split partial ...passed 00:15:46.455 Test: blockdev reset ...passed 00:15:46.455 Test: blockdev write read 8 blocks ...passed 00:15:46.455 Test: blockdev write read size > 128k ...passed 00:15:46.455 Test: blockdev write read invalid size ...passed 00:15:46.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:46.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:46.455 Test: blockdev write read max offset ...passed 00:15:46.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:46.455 Test: blockdev writev readv 8 blocks ...passed 00:15:46.455 Test: blockdev writev readv 30 x 1block ...passed 00:15:46.455 Test: blockdev writev readv block ...passed 00:15:46.455 Test: blockdev writev readv size > 128k ...passed 00:15:46.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:46.455 Test: blockdev comparev and writev ...passed 00:15:46.455 Test: blockdev nvme passthru rw ...passed 00:15:46.455 Test: blockdev nvme passthru vendor specific ...passed 00:15:46.455 Test: blockdev nvme admin passthru ...passed 00:15:46.455 Test: blockdev copy ...passed 00:15:46.455 Suite: bdevio tests on: nvme2n1 00:15:46.455 Test: blockdev write read block ...passed 00:15:46.455 Test: blockdev write zeroes read block ...passed 00:15:46.455 Test: blockdev write zeroes read no split ...passed 00:15:46.455 Test: blockdev write zeroes read split ...passed 00:15:46.455 Test: blockdev write zeroes read split partial ...passed 00:15:46.455 Test: blockdev reset ...passed 
00:15:46.455 Test: blockdev write read 8 blocks ...passed 00:15:46.455 Test: blockdev write read size > 128k ...passed 00:15:46.455 Test: blockdev write read invalid size ...passed 00:15:46.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:46.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:46.455 Test: blockdev write read max offset ...passed 00:15:46.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:46.455 Test: blockdev writev readv 8 blocks ...passed 00:15:46.455 Test: blockdev writev readv 30 x 1block ...passed 00:15:46.455 Test: blockdev writev readv block ...passed 00:15:46.455 Test: blockdev writev readv size > 128k ...passed 00:15:46.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:46.455 Test: blockdev comparev and writev ...passed 00:15:46.455 Test: blockdev nvme passthru rw ...passed 00:15:46.455 Test: blockdev nvme passthru vendor specific ...passed 00:15:46.455 Test: blockdev nvme admin passthru ...passed 00:15:46.455 Test: blockdev copy ...passed 00:15:46.455 Suite: bdevio tests on: nvme1n1 00:15:46.455 Test: blockdev write read block ...passed 00:15:46.455 Test: blockdev write zeroes read block ...passed 00:15:46.455 Test: blockdev write zeroes read no split ...passed 00:15:46.455 Test: blockdev write zeroes read split ...passed 00:15:46.455 Test: blockdev write zeroes read split partial ...passed 00:15:46.455 Test: blockdev reset ...passed 00:15:46.455 Test: blockdev write read 8 blocks ...passed 00:15:46.455 Test: blockdev write read size > 128k ...passed 00:15:46.455 Test: blockdev write read invalid size ...passed 00:15:46.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:46.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:46.455 Test: blockdev write read max offset ...passed 00:15:46.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:46.455 Test: blockdev writev readv 8 blocks ...passed 00:15:46.455 Test: blockdev writev readv 30 x 1block ...passed 00:15:46.455 Test: blockdev writev readv block ...passed 00:15:46.455 Test: blockdev writev readv size > 128k ...passed 00:15:46.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:46.455 Test: blockdev comparev and writev ...passed 00:15:46.455 Test: blockdev nvme passthru rw ...passed 00:15:46.455 Test: blockdev nvme passthru vendor specific ...passed 00:15:46.455 Test: blockdev nvme admin passthru ...passed 00:15:46.455 Test: blockdev copy ...passed 00:15:46.455 Suite: bdevio tests on: nvme0n3 00:15:46.455 Test: blockdev write read block ...passed 00:15:46.455 Test: blockdev write zeroes read block ...passed 00:15:46.455 Test: blockdev write zeroes read no split ...passed 00:15:46.455 Test: blockdev write zeroes read split ...passed 00:15:46.455 Test: blockdev write zeroes read split partial ...passed 00:15:46.455 Test: blockdev reset ...passed 00:15:46.455 Test: blockdev write read 8 blocks ...passed 00:15:46.455 Test: blockdev write read size > 128k ...passed 00:15:46.455 Test: blockdev write read invalid size ...passed 00:15:46.455 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:46.455 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:46.455 Test: blockdev write read max offset ...passed 00:15:46.455 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:46.455 Test: blockdev writev readv 8 blocks 
...passed 00:15:46.455 Test: blockdev writev readv 30 x 1block ...passed 00:15:46.455 Test: blockdev writev readv block ...passed 00:15:46.455 Test: blockdev writev readv size > 128k ...passed 00:15:46.455 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:46.455 Test: blockdev comparev and writev ...passed 00:15:46.455 Test: blockdev nvme passthru rw ...passed 00:15:46.455 Test: blockdev nvme passthru vendor specific ...passed 00:15:46.455 Test: blockdev nvme admin passthru ...passed 00:15:46.455 Test: blockdev copy ...passed 00:15:46.455 Suite: bdevio tests on: nvme0n2 00:15:46.718 Test: blockdev write read block ...passed 00:15:46.718 Test: blockdev write zeroes read block ...passed 00:15:46.718 Test: blockdev write zeroes read no split ...passed 00:15:46.718 Test: blockdev write zeroes read split ...passed 00:15:46.718 Test: blockdev write zeroes read split partial ...passed 00:15:46.718 Test: blockdev reset ...passed 00:15:46.718 Test: blockdev write read 8 blocks ...passed 00:15:46.718 Test: blockdev write read size > 128k ...passed 00:15:46.718 Test: blockdev write read invalid size ...passed 00:15:46.718 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:46.718 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:46.718 Test: blockdev write read max offset ...passed 00:15:46.718 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:46.718 Test: blockdev writev readv 8 blocks ...passed 00:15:46.718 Test: blockdev writev readv 30 x 1block ...passed 00:15:46.718 Test: blockdev writev readv block ...passed 00:15:46.718 Test: blockdev writev readv size > 128k ...passed 00:15:46.718 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:46.718 Test: blockdev comparev and writev ...passed 00:15:46.718 Test: blockdev nvme passthru rw ...passed 00:15:46.718 Test: blockdev nvme passthru vendor specific ...passed 00:15:46.718 Test: blockdev nvme admin passthru ...passed 00:15:46.718 Test: blockdev copy ...passed 00:15:46.718 Suite: bdevio tests on: nvme0n1 00:15:46.718 Test: blockdev write read block ...passed 00:15:46.718 Test: blockdev write zeroes read block ...passed 00:15:46.718 Test: blockdev write zeroes read no split ...passed 00:15:46.718 Test: blockdev write zeroes read split ...passed 00:15:46.718 Test: blockdev write zeroes read split partial ...passed 00:15:46.718 Test: blockdev reset ...passed 00:15:46.718 Test: blockdev write read 8 blocks ...passed 00:15:46.718 Test: blockdev write read size > 128k ...passed 00:15:46.718 Test: blockdev write read invalid size ...passed 00:15:46.718 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:46.718 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:46.718 Test: blockdev write read max offset ...passed 00:15:46.718 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:46.718 Test: blockdev writev readv 8 blocks ...passed 00:15:46.718 Test: blockdev writev readv 30 x 1block ...passed 00:15:46.718 Test: blockdev writev readv block ...passed 00:15:46.718 Test: blockdev writev readv size > 128k ...passed 00:15:46.718 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:46.718 Test: blockdev comparev and writev ...passed 00:15:46.718 Test: blockdev nvme passthru rw ...passed 00:15:46.718 Test: blockdev nvme passthru vendor specific ...passed 00:15:46.718 Test: blockdev nvme admin passthru ...passed 00:15:46.718 Test: blockdev copy ...passed 
00:15:46.718 00:15:46.718 Run Summary: Type Total Ran Passed Failed Inactive 00:15:46.718 suites 6 6 n/a 0 0 00:15:46.718 tests 138 138 138 0 0 00:15:46.718 asserts 780 780 780 0 n/a 00:15:46.718 00:15:46.718 Elapsed time = 0.624 seconds 00:15:46.718 0 00:15:46.718 20:58:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 83502 00:15:46.718 20:58:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 83502 ']' 00:15:46.718 20:58:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 83502 00:15:46.718 20:58:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:46.718 20:58:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:46.718 20:58:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83502 00:15:46.718 20:58:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:46.718 20:58:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:46.718 20:58:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83502' 00:15:46.718 killing process with pid 83502 00:15:46.718 20:58:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 83502 00:15:46.718 20:58:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 83502 00:15:46.980 20:58:04 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:46.980 00:15:46.980 real 0m1.610s 00:15:46.980 user 0m3.879s 00:15:46.980 sys 0m0.367s 00:15:46.980 20:58:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:46.980 ************************************ 00:15:46.980 20:58:04 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:46.980 END TEST bdev_bounds 00:15:46.980 ************************************ 00:15:46.980 20:58:05 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:46.980 20:58:05 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:46.980 20:58:05 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.980 20:58:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:46.980 ************************************ 00:15:46.980 START TEST bdev_nbd 00:15:46.980 ************************************ 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
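The killprocess teardown traced above reduces to a small, reusable pattern: verify the pid still exists, check the process name as a safety guard, then kill and reap it. A condensed paraphrase of what the autotest_common.sh helper does, slightly simplified (the sudo branch visible in the trace is omitted):

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                  # no pid recorded, nothing to do
    kill -0 "$pid" 2>/dev/null || return 0     # already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")    # guard: confirm what we are about to kill
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null                    # reaping works because the test shell launched it
  }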
00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=83555 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 83555 /var/tmp/spdk-nbd.sock 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 83555 ']' 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:46.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.980 20:58:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:47.241 [2024-11-20 20:58:05.107900] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
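bdev_svc has just been launched with -r /var/tmp/spdk-nbd.sock, so every nbd_* step that follows is an RPC against that socket. A minimal sketch of the waitforlisten-plus-export flow using the same rpc.py calls that appear in the trace (the bdev name and nbd node are examples; rpc_get_methods is simply a cheap call to confirm the server is up):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  # Poll until the app answers on the UNIX socket; waitforlisten does roughly this.
  for _ in $(seq 1 100); do
    $rpc rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done
  $rpc nbd_start_disk nvme0n1 /dev/nbd0   # map a bdev from bdev.json onto a kernel NBD node
  $rpc nbd_get_disks                      # JSON list of active bdev/nbd pairs
  $rpc nbd_stop_disk /dev/nbd0            # detach when done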
00:15:47.241 [2024-11-20 20:58:05.108205] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.241 [2024-11-20 20:58:05.251228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.241 [2024-11-20 20:58:05.290740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:48.185 20:58:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:48.185 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:48.185 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:48.185 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:48.185 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.186 
1+0 records in 00:15:48.186 1+0 records out 00:15:48.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107578 s, 3.8 MB/s 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:48.186 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.447 1+0 records in 00:15:48.447 1+0 records out 00:15:48.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000905842 s, 4.5 MB/s 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:48.447 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:48.709 20:58:06 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.709 1+0 records in 00:15:48.709 1+0 records out 00:15:48.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000900951 s, 4.5 MB/s 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:48.709 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:48.971 1+0 records in 00:15:48.971 1+0 records out 00:15:48.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135771 s, 3.0 MB/s 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:48.971 20:58:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:48.971 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:48.971 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:49.232 1+0 records in 00:15:49.232 1+0 records out 00:15:49.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000986803 s, 4.2 MB/s 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:49.232 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:49.494 20:58:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:49.494 1+0 records in 00:15:49.494 1+0 records out 00:15:49.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102166 s, 4.0 MB/s 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:49.494 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:49.756 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:49.756 { 00:15:49.756 "nbd_device": "/dev/nbd0", 00:15:49.756 "bdev_name": "nvme0n1" 00:15:49.756 }, 00:15:49.756 { 00:15:49.756 "nbd_device": "/dev/nbd1", 00:15:49.756 "bdev_name": "nvme0n2" 00:15:49.756 }, 00:15:49.756 { 00:15:49.756 "nbd_device": "/dev/nbd2", 00:15:49.756 "bdev_name": "nvme0n3" 00:15:49.756 }, 00:15:49.756 { 00:15:49.756 "nbd_device": "/dev/nbd3", 00:15:49.756 "bdev_name": "nvme1n1" 00:15:49.756 }, 00:15:49.756 { 00:15:49.756 "nbd_device": "/dev/nbd4", 00:15:49.756 "bdev_name": "nvme2n1" 00:15:49.756 }, 00:15:49.756 { 00:15:49.756 "nbd_device": "/dev/nbd5", 00:15:49.756 "bdev_name": "nvme3n1" 00:15:49.756 } 00:15:49.756 ]' 00:15:49.756 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:49.756 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:49.756 { 00:15:49.756 "nbd_device": "/dev/nbd0", 00:15:49.756 "bdev_name": "nvme0n1" 00:15:49.756 }, 00:15:49.756 { 00:15:49.756 "nbd_device": "/dev/nbd1", 00:15:49.756 "bdev_name": "nvme0n2" 00:15:49.756 }, 00:15:49.756 { 00:15:49.756 "nbd_device": "/dev/nbd2", 00:15:49.756 "bdev_name": "nvme0n3" 00:15:49.756 }, 00:15:49.756 { 00:15:49.756 "nbd_device": "/dev/nbd3", 00:15:49.756 "bdev_name": "nvme1n1" 00:15:49.756 }, 00:15:49.756 { 00:15:49.756 "nbd_device": "/dev/nbd4", 00:15:49.756 "bdev_name": "nvme2n1" 00:15:49.756 }, 00:15:49.756 { 00:15:49.756 "nbd_device": "/dev/nbd5", 00:15:49.756 "bdev_name": "nvme3n1" 00:15:49.756 } 00:15:49.756 ]' 00:15:49.756 20:58:07 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:49.756 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:49.756 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:49.756 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:49.756 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:49.756 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:49.756 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.756 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:50.017 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:50.017 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:50.017 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:50.017 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.017 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.017 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:50.017 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.017 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.017 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.017 20:58:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:50.278 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:50.278 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:50.278 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:50.278 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.278 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.278 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:50.278 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.278 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.278 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.278 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:50.540 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.541 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.541 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.541 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:50.802 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:50.802 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:50.802 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:50.802 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:50.802 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:50.802 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:50.802 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:50.802 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:50.802 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:50.802 20:58:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:51.061 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:51.061 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:51.061 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:51.061 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.061 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.061 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:51.061 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:51.061 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.061 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:51.061 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:51.061 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:51.318 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:51.318 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:51.318 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:51.318 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:51.318 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:51.318 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:51.318 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:51.318 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:51.319 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:51.578 /dev/nbd0 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.578 1+0 records in 00:15:51.578 1+0 records out 00:15:51.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000991432 s, 4.1 MB/s 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:51.578 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:51.838 /dev/nbd1 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.838 1+0 records in 00:15:51.838 1+0 records out 00:15:51.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104408 s, 3.9 MB/s 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:51.838 20:58:09 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:51.838 20:58:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:52.099 /dev/nbd10 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.099 1+0 records in 00:15:52.099 1+0 records out 00:15:52.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110527 s, 3.7 MB/s 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:52.099 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:52.360 /dev/nbd11 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:52.360 20:58:10 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.360 1+0 records in 00:15:52.360 1+0 records out 00:15:52.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00149264 s, 2.7 MB/s 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:52.360 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:52.621 /dev/nbd12 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.621 1+0 records in 00:15:52.621 1+0 records out 00:15:52.621 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105348 s, 3.9 MB/s 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:52.621 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:52.882 /dev/nbd13 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:52.882 1+0 records in 00:15:52.882 1+0 records out 00:15:52.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109905 s, 3.7 MB/s 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:52.882 20:58:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:53.144 { 00:15:53.144 "nbd_device": "/dev/nbd0", 00:15:53.144 "bdev_name": "nvme0n1" 00:15:53.144 }, 00:15:53.144 { 00:15:53.144 "nbd_device": "/dev/nbd1", 00:15:53.144 "bdev_name": "nvme0n2" 00:15:53.144 }, 00:15:53.144 { 00:15:53.144 "nbd_device": "/dev/nbd10", 00:15:53.144 "bdev_name": "nvme0n3" 00:15:53.144 }, 00:15:53.144 { 00:15:53.144 "nbd_device": "/dev/nbd11", 00:15:53.144 "bdev_name": "nvme1n1" 00:15:53.144 }, 00:15:53.144 { 00:15:53.144 "nbd_device": "/dev/nbd12", 00:15:53.144 "bdev_name": "nvme2n1" 00:15:53.144 }, 00:15:53.144 { 00:15:53.144 "nbd_device": "/dev/nbd13", 00:15:53.144 "bdev_name": "nvme3n1" 00:15:53.144 } 00:15:53.144 ]' 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:53.144 { 00:15:53.144 "nbd_device": "/dev/nbd0", 00:15:53.144 "bdev_name": "nvme0n1" 00:15:53.144 }, 00:15:53.144 { 00:15:53.144 "nbd_device": "/dev/nbd1", 00:15:53.144 "bdev_name": "nvme0n2" 00:15:53.144 }, 00:15:53.144 { 00:15:53.144 "nbd_device": "/dev/nbd10", 00:15:53.144 "bdev_name": "nvme0n3" 00:15:53.144 }, 00:15:53.144 { 00:15:53.144 "nbd_device": "/dev/nbd11", 00:15:53.144 "bdev_name": "nvme1n1" 00:15:53.144 }, 00:15:53.144 { 00:15:53.144 "nbd_device": "/dev/nbd12", 00:15:53.144 "bdev_name": "nvme2n1" 00:15:53.144 }, 00:15:53.144 { 00:15:53.144 "nbd_device": "/dev/nbd13", 00:15:53.144 "bdev_name": "nvme3n1" 00:15:53.144 } 00:15:53.144 ]' 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:53.144 /dev/nbd1 00:15:53.144 /dev/nbd10 00:15:53.144 /dev/nbd11 00:15:53.144 /dev/nbd12 00:15:53.144 /dev/nbd13' 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:53.144 /dev/nbd1 00:15:53.144 /dev/nbd10 00:15:53.144 /dev/nbd11 00:15:53.144 /dev/nbd12 00:15:53.144 /dev/nbd13' 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:53.144 256+0 records in 00:15:53.144 256+0 records out 00:15:53.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00720293 s, 146 MB/s 00:15:53.144 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:53.145 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:53.405 256+0 records in 00:15:53.405 256+0 records out 00:15:53.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244033 s, 4.3 MB/s 00:15:53.405 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:53.405 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:53.666 256+0 records in 00:15:53.666 256+0 records out 00:15:53.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242707 s, 
4.3 MB/s 00:15:53.666 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:53.666 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:53.928 256+0 records in 00:15:53.928 256+0 records out 00:15:53.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.248474 s, 4.2 MB/s 00:15:53.928 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:53.928 20:58:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:54.187 256+0 records in 00:15:54.187 256+0 records out 00:15:54.187 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.27607 s, 3.8 MB/s 00:15:54.187 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:54.187 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:54.447 256+0 records in 00:15:54.447 256+0 records out 00:15:54.447 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.218752 s, 4.8 MB/s 00:15:54.447 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:54.447 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:54.447 256+0 records in 00:15:54.447 256+0 records out 00:15:54.447 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.227662 s, 4.6 MB/s 00:15:54.447 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:54.447 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:54.447 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:54.447 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:54.447 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:54.447 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:54.447 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:54.447 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:54.447 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:54.447 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:54.447 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:54.708 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:54.709 
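The write-and-verify pass running here is plain dd plus cmp: one 1 MiB random file is written through every NBD node with O_DIRECT, then compared byte-for-byte against each device. The same loop in compact form (device list shortened for illustration):

  tmp=/tmp/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256 status=none           # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct status=none  # write phase, O_DIRECT
    cmp -b -n 1M "$tmp" "$nbd" || echo "verify failed on $nbd"         # readback must match
  done
  rm "$tmp"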
20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:54.709 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:54.970 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:54.971 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:54.971 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:54.971 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:54.971 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:54.971 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:54.971 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:54.971 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:54.971 20:58:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:54.971 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:54.971 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:54.971 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:54.971 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:54.971 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:54.971 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:54.971 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:54.971 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:54.971 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:54.971 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:55.228 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:55.228 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:55.228 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:55.228 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.228 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.228 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:55.228 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:55.228 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.228 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.228 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:55.486 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:55.486 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:55.486 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:55.486 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.486 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.486 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:55.486 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:55.486 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.486 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.487 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:55.744 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:55.744 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:55.744 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:55.744 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:55.744 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:55.744 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:55.744 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:55.744 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:55.744 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:55.744 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:56.002 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:56.002 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:56.002 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:56.002 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:56.002 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:56.002 
20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:56.002 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:56.002 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:56.002 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:56.002 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.002 20:58:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:56.002 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:56.002 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:56.002 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:56.260 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:56.260 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:56.260 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:56.260 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:56.260 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:56.260 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:56.260 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:56.260 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:56.260 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:56.260 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:56.260 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:56.260 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:56.260 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:56.260 malloc_lvol_verify 00:15:56.260 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:56.518 543d86c9-52c7-4527-8033-c896cc5cc4c1 00:15:56.518 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:56.775 806c877d-671d-41f5-96c0-c3f6f013790b 00:15:56.775 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:57.033 /dev/nbd0 00:15:57.033 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:57.033 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:57.033 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:57.033 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:57.033 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:57.033 mke2fs 1.47.0 (5-Feb-2023) 00:15:57.033 Discarding device blocks: 0/4096 
done 00:15:57.033 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:57.033 00:15:57.033 Allocating group tables: 0/1 done 00:15:57.033 Writing inode tables: 0/1 done 00:15:57.033 Creating journal (1024 blocks): done 00:15:57.033 Writing superblocks and filesystem accounting information: 0/1 done 00:15:57.033 00:15:57.033 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:57.033 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.033 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:57.033 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:57.033 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:57.033 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.033 20:58:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 83555 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 83555 ']' 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 83555 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83555 00:15:57.292 killing process with pid 83555 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83555' 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 83555 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 83555 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:57.292 00:15:57.292 real 0m10.368s 00:15:57.292 user 0m13.930s 00:15:57.292 sys 0m3.897s 00:15:57.292 ************************************ 00:15:57.292 END TEST bdev_nbd 00:15:57.292 ************************************ 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.292 20:58:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
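The nbd_common helpers exercised above reduce to three shell patterns: write one 1 MiB random file to every exported /dev/nbd* with dd(oflag=direct), read each device back with cmp -b -n 1M, then stop each disk over the RPC socket and poll /proc/partitions (up to 20 attempts, as waitfornbd_exit does) until the kernel drops the node. A minimal standalone sketch of that flow, assuming the devices are already exported by spdk-nbd; the temp-file path and the sleep between polls are illustrative, the rest mirrors the trace:

#!/usr/bin/env bash
# Sketch of the write/verify/stop pattern from the bdev_nbd trace above.
# Assumes the NBD devices are already exported and that rpc.py can reach
# the target at /var/tmp/spdk-nbd.sock, as in the test run.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
tmp=/tmp/nbdrandtest                      # illustrative path
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 1 MiB of random data

for dev in "${nbd_list[@]}"; do
    # O_DIRECT write, exactly as in the trace
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
done

for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp" "$dev"            # byte-for-byte read-back verify
done
rm "$tmp"

for dev in "${nbd_list[@]}"; do
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
    name=$(basename "$dev")
    # Poll until the kernel removes the node (20-attempt budget as in the test)
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1                          # illustrative back-off
    done
done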
00:15:57.552 20:58:15 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:57.552 20:58:15 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:15:57.552 20:58:15 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:15:57.552 20:58:15 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:15:57.552 20:58:15 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:57.552 20:58:15 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.552 20:58:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.552 ************************************ 00:15:57.552 START TEST bdev_fio 00:15:57.552 ************************************ 00:15:57.552 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:15:57.552 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:57.553 ************************************ 00:15:57.553 START TEST bdev_fio_rw_verify 00:15:57.553 ************************************ 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:57.553 20:58:15 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:57.812 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:57.812 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:57.812 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:57.812 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:57.812 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:57.812 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:57.812 fio-3.35 00:15:57.812 Starting 6 threads 00:16:10.111 00:16:10.111 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=83957: Wed Nov 20 20:58:26 2024 00:16:10.111 read: IOPS=14.5k, BW=56.6MiB/s (59.4MB/s)(566MiB/10003msec) 00:16:10.111 slat (usec): min=2, max=2876, avg= 6.60, stdev=16.56 00:16:10.111 clat (usec): min=106, max=6437, avg=1339.02, stdev=718.43 00:16:10.111 lat (usec): min=109, max=6732, avg=1345.62, stdev=719.14 
00:16:10.111 clat percentiles (usec): 00:16:10.111 | 50.000th=[ 1254], 99.000th=[ 3589], 99.900th=[ 5014], 99.990th=[ 6325], 00:16:10.111 | 99.999th=[ 6456] 00:16:10.111 write: IOPS=14.8k, BW=57.7MiB/s (60.5MB/s)(577MiB/10003msec); 0 zone resets 00:16:10.111 slat (usec): min=12, max=4315, avg=41.09, stdev=139.31 00:16:10.111 clat (usec): min=82, max=8031, avg=1610.60, stdev=788.70 00:16:10.111 lat (usec): min=95, max=8052, avg=1651.70, stdev=801.32 00:16:10.111 clat percentiles (usec): 00:16:10.111 | 50.000th=[ 1483], 99.000th=[ 4047], 99.900th=[ 5407], 99.990th=[ 6718], 00:16:10.111 | 99.999th=[ 7504] 00:16:10.111 bw ( KiB/s): min=48835, max=80826, per=100.00%, avg=59487.16, stdev=1695.34, samples=114 00:16:10.111 iops : min=12205, max=20206, avg=14870.95, stdev=423.91, samples=114 00:16:10.111 lat (usec) : 100=0.01%, 250=1.44%, 500=5.31%, 750=8.98%, 1000=12.23% 00:16:10.111 lat (msec) : 2=51.57%, 4=19.69%, 10=0.77% 00:16:10.111 cpu : usr=42.58%, sys=32.93%, ctx=5385, majf=0, minf=16511 00:16:10.111 IO depths : 1=11.3%, 2=23.7%, 4=51.2%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.111 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.111 issued rwts: total=145000,147701,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.111 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:10.111 00:16:10.111 Run status group 0 (all jobs): 00:16:10.111 READ: bw=56.6MiB/s (59.4MB/s), 56.6MiB/s-56.6MiB/s (59.4MB/s-59.4MB/s), io=566MiB (594MB), run=10003-10003msec 00:16:10.111 WRITE: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=577MiB (605MB), run=10003-10003msec 00:16:10.111 ----------------------------------------------------- 00:16:10.111 Suppressions used: 00:16:10.111 count bytes template 00:16:10.111 6 48 /usr/src/fio/parse.c 00:16:10.111 2608 250368 /usr/src/fio/iolog.c 00:16:10.111 1 8 libtcmalloc_minimal.so 00:16:10.111 1 904 libcrypto.so 00:16:10.111 ----------------------------------------------------- 00:16:10.111 00:16:10.111 ************************************ 00:16:10.111 END TEST bdev_fio_rw_verify 00:16:10.111 ************************************ 00:16:10.111 00:16:10.111 real 0m11.133s 00:16:10.111 user 0m26.268s 00:16:10.111 sys 0m20.040s 00:16:10.111 20:58:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.111 20:58:26 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:10.111 20:58:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:10.111 20:58:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:10.111 20:58:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:10.111 20:58:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:10.111 20:58:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:10.111 20:58:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:10.111 20:58:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:10.111 20:58:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:10.111 20:58:26 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:10.111 20:58:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:10.111 20:58:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:10.111 20:58:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:10.112 20:58:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:10.112 20:58:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:10.112 20:58:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:10.112 20:58:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:10.112 20:58:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:10.112 20:58:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "eec91a3b-987c-44c9-a6ea-b4db1f66d2d1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eec91a3b-987c-44c9-a6ea-b4db1f66d2d1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "dc72f408-c3da-4110-8acb-312ffb15b3ab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dc72f408-c3da-4110-8acb-312ffb15b3ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "e79e1402-cf97-40fa-844e-c6aabf2b5471"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e79e1402-cf97-40fa-844e-c6aabf2b5471",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "65f57441-f764-4bed-934b-a1a62461ab08"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "65f57441-f764-4bed-934b-a1a62461ab08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "396c1868-2fb5-4772-ae5f-9e7554b420cc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "396c1868-2fb5-4772-ae5f-9e7554b420cc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "55bc306f-6c8f-4bb4-982c-cb1d421a445f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "55bc306f-6c8f-4bb4-982c-cb1d421a445f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:10.112 20:58:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:10.112 20:58:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:10.112 20:58:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:10.112 /home/vagrant/spdk_repo/spdk 00:16:10.112 ************************************ 00:16:10.112 END TEST bdev_fio 00:16:10.112 ************************************ 00:16:10.112 20:58:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:10.112 
20:58:26 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:10.112 00:16:10.112 real 0m11.315s 00:16:10.112 user 0m26.344s 00:16:10.112 sys 0m20.122s 00:16:10.112 20:58:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.112 20:58:26 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:10.112 20:58:26 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:10.112 20:58:26 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:10.112 20:58:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:10.112 20:58:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.112 20:58:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:10.112 ************************************ 00:16:10.112 START TEST bdev_verify 00:16:10.112 ************************************ 00:16:10.112 20:58:26 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:10.112 [2024-11-20 20:58:26.912641] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:16:10.112 [2024-11-20 20:58:26.912807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84124 ] 00:16:10.112 [2024-11-20 20:58:27.061905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:10.112 [2024-11-20 20:58:27.103094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.112 [2024-11-20 20:58:27.103155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.112 Running I/O for 5 seconds... 
00:16:11.629 24000.00 IOPS, 93.75 MiB/s [2024-11-20T20:58:30.691Z] 24336.00 IOPS, 95.06 MiB/s [2024-11-20T20:58:32.078Z] 24288.00 IOPS, 94.88 MiB/s [2024-11-20T20:58:32.651Z] 24136.00 IOPS, 94.28 MiB/s [2024-11-20T20:58:32.651Z] 23718.40 IOPS, 92.65 MiB/s 00:16:14.532 Latency(us) 00:16:14.532 [2024-11-20T20:58:32.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.532 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:14.532 Verification LBA range: start 0x0 length 0x80000 00:16:14.533 nvme0n1 : 5.05 1953.60 7.63 0.00 0.00 65400.37 8368.44 115343.36 00:16:14.533 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:14.533 Verification LBA range: start 0x80000 length 0x80000 00:16:14.533 nvme0n1 : 5.05 1697.15 6.63 0.00 0.00 75268.93 10989.88 78643.20 00:16:14.533 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:14.533 Verification LBA range: start 0x0 length 0x80000 00:16:14.533 nvme0n2 : 5.05 1952.87 7.63 0.00 0.00 65321.80 11292.36 98404.82 00:16:14.533 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:14.533 Verification LBA range: start 0x80000 length 0x80000 00:16:14.533 nvme0n2 : 5.05 1696.65 6.63 0.00 0.00 75127.61 9981.64 72997.02 00:16:14.533 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:14.533 Verification LBA range: start 0x0 length 0x80000 00:16:14.533 nvme0n3 : 5.06 1945.93 7.60 0.00 0.00 65448.05 11645.24 80256.39 00:16:14.533 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:14.533 Verification LBA range: start 0x80000 length 0x80000 00:16:14.533 nvme0n3 : 5.07 1715.90 6.70 0.00 0.00 74106.54 8771.74 77030.01 00:16:14.533 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:14.533 Verification LBA range: start 0x0 length 0xbd0bd 00:16:14.533 nvme1n1 : 5.07 2687.85 10.50 0.00 0.00 47238.27 4537.11 55655.19 00:16:14.533 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:14.533 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:14.533 nvme1n1 : 5.06 2429.76 9.49 0.00 0.00 52120.37 5217.67 67350.84 00:16:14.533 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:14.533 Verification LBA range: start 0x0 length 0xa0000 00:16:14.533 nvme2n1 : 5.06 1974.20 7.71 0.00 0.00 64322.19 9527.93 102437.81 00:16:14.533 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:14.533 Verification LBA range: start 0xa0000 length 0xa0000 00:16:14.533 nvme2n1 : 5.07 1740.47 6.80 0.00 0.00 72547.13 9326.28 67350.84 00:16:14.533 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:14.533 Verification LBA range: start 0x0 length 0x20000 00:16:14.533 nvme3n1 : 5.06 1921.41 7.51 0.00 0.00 65979.05 4385.87 114536.76 00:16:14.533 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:14.533 Verification LBA range: start 0x20000 length 0x20000 00:16:14.533 nvme3n1 : 5.08 1714.71 6.70 0.00 0.00 73517.93 4587.52 71787.13 00:16:14.533 [2024-11-20T20:58:32.652Z] =================================================================================================================== 00:16:14.533 [2024-11-20T20:58:32.652Z] Total : 23430.51 91.53 0.00 0.00 65074.86 4385.87 115343.36 00:16:14.794 ************************************ 00:16:14.794 END TEST bdev_verify 00:16:14.794 ************************************ 00:16:14.794 00:16:14.794 
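bdev_verify above is a single bdevperf run whose full command line is visible in the trace; reproduced here standalone. With -m 0x3 two reactor cores come up, and -C spreads the verify job across both of them, which matches the paired Core Mask 0x1 / 0x2 rows per device in the latency table:

#!/usr/bin/env bash
# The bdev_verify invocation from the trace, runnable standalone:
# -q 128 queue depth, -o 4096-byte I/Os, -w verify read-back workload,
# -t 5 seconds, -m 0x3 core mask, -C to drive each bdev from every core.
set -euo pipefail
spdk=/home/vagrant/spdk_repo/spdk

"$spdk"/build/examples/bdevperf \
    --json "$spdk"/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3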
real 0m5.970s 00:16:14.794 user 0m9.449s 00:16:14.794 sys 0m1.570s 00:16:14.794 20:58:32 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.794 20:58:32 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:14.794 20:58:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:14.794 20:58:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:14.794 20:58:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.794 20:58:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:14.794 ************************************ 00:16:14.794 START TEST bdev_verify_big_io 00:16:14.794 ************************************ 00:16:14.794 20:58:32 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:15.057 [2024-11-20 20:58:32.954641] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:16:15.057 [2024-11-20 20:58:32.954798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84222 ] 00:16:15.057 [2024-11-20 20:58:33.102327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:15.057 [2024-11-20 20:58:33.141907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.057 [2024-11-20 20:58:33.141939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.630 Running I/O for 5 seconds... 
00:16:21.477 1088.00 IOPS, 68.00 MiB/s [2024-11-20T20:58:39.596Z] 2532.00 IOPS, 158.25 MiB/s [2024-11-20T20:58:39.855Z] 2877.33 IOPS, 179.83 MiB/s 00:16:21.736 Latency(us) 00:16:21.736 [2024-11-20T20:58:39.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.736 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:21.736 Verification LBA range: start 0x0 length 0x8000 00:16:21.736 nvme0n1 : 5.53 130.09 8.13 0.00 0.00 952719.80 243592.27 922746.88 00:16:21.736 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:21.736 Verification LBA range: start 0x8000 length 0x8000 00:16:21.736 nvme0n1 : 5.87 103.60 6.48 0.00 0.00 1159505.17 7763.50 1471232.79 00:16:21.736 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:21.736 Verification LBA range: start 0x0 length 0x8000 00:16:21.736 nvme0n2 : 5.73 150.86 9.43 0.00 0.00 795478.40 5973.86 1129235.69 00:16:21.736 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:21.736 Verification LBA range: start 0x8000 length 0x8000 00:16:21.736 nvme0n2 : 5.87 87.21 5.45 0.00 0.00 1310156.01 282308.92 1542213.32 00:16:21.736 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:21.736 Verification LBA range: start 0x0 length 0x8000 00:16:21.736 nvme0n3 : 5.54 138.66 8.67 0.00 0.00 851696.25 74610.22 1342177.28 00:16:21.736 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:21.736 Verification LBA range: start 0x8000 length 0x8000 00:16:21.736 nvme0n3 : 5.95 122.28 7.64 0.00 0.00 925359.46 33675.42 1497043.89 00:16:21.736 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:21.736 Verification LBA range: start 0x0 length 0xbd0b 00:16:21.736 nvme1n1 : 5.65 189.61 11.85 0.00 0.00 601847.05 28230.89 1174405.12 00:16:21.736 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:21.736 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:21.736 nvme1n1 : 5.94 118.52 7.41 0.00 0.00 907022.61 20064.10 1755154.90 00:16:21.736 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:21.736 Verification LBA range: start 0x0 length 0xa000 00:16:21.736 nvme2n1 : 5.83 139.89 8.74 0.00 0.00 795222.83 7813.91 903388.55 00:16:21.736 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:21.736 Verification LBA range: start 0xa000 length 0xa000 00:16:21.736 nvme2n1 : 6.04 135.16 8.45 0.00 0.00 767778.04 1102.77 2606921.26 00:16:21.736 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:21.737 Verification LBA range: start 0x0 length 0x2000 00:16:21.737 nvme3n1 : 5.84 164.35 10.27 0.00 0.00 664490.18 1077.56 583976.17 00:16:21.737 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:21.737 Verification LBA range: start 0x2000 length 0x2000 00:16:21.737 nvme3n1 : 6.26 282.56 17.66 0.00 0.00 354617.43 357.61 2322999.14 00:16:21.737 [2024-11-20T20:58:39.856Z] =================================================================================================================== 00:16:21.737 [2024-11-20T20:58:39.856Z] Total : 1762.79 110.17 0.00 0.00 764396.47 357.61 2606921.26 00:16:21.997 00:16:21.997 real 0m7.045s 00:16:21.997 user 0m12.964s 00:16:21.997 sys 0m0.467s 00:16:21.997 ************************************ 00:16:21.997 END TEST bdev_verify_big_io 00:16:21.997 ************************************ 00:16:21.997 
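bdev_verify_big_io is the same bdevperf verify run scaled up to 64 KiB I/Os; only the -o size changes, again taken verbatim from the trace above:

#!/usr/bin/env bash
# Large-I/O variant of the verify run; identical except for -o 65536.
set -euo pipefail
spdk=/home/vagrant/spdk_repo/spdk

"$spdk"/build/examples/bdevperf \
    --json "$spdk"/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3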
20:58:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.997 20:58:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.997 20:58:39 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:21.997 20:58:39 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:21.997 20:58:39 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.997 20:58:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:21.997 ************************************ 00:16:21.997 START TEST bdev_write_zeroes 00:16:21.997 ************************************ 00:16:21.997 20:58:39 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:21.997 [2024-11-20 20:58:40.038073] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:16:21.997 [2024-11-20 20:58:40.038166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84323 ] 00:16:22.258 [2024-11-20 20:58:40.180608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.258 [2024-11-20 20:58:40.201941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.519 Running I/O for 1 seconds... 00:16:23.462 84480.00 IOPS, 330.00 MiB/s 00:16:23.462 Latency(us) 00:16:23.462 [2024-11-20T20:58:41.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.462 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:23.462 nvme0n1 : 1.02 13826.82 54.01 0.00 0.00 9247.43 6251.13 20769.87 00:16:23.462 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:23.462 nvme0n2 : 1.03 13824.29 54.00 0.00 0.00 9241.57 5444.53 18955.03 00:16:23.462 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:23.462 nvme0n3 : 1.02 13809.34 53.94 0.00 0.00 9243.88 6301.54 20971.52 00:16:23.462 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:23.462 nvme1n1 : 1.03 14731.64 57.55 0.00 0.00 8658.24 5595.77 16736.89 00:16:23.462 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:23.462 nvme2n1 : 1.03 13684.29 53.45 0.00 0.00 9248.35 3932.16 19055.85 00:16:23.462 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:23.462 nvme3n1 : 1.03 13715.82 53.58 0.00 0.00 9219.47 3982.57 21979.77 00:16:23.462 [2024-11-20T20:58:41.581Z] =================================================================================================================== 00:16:23.462 [2024-11-20T20:58:41.581Z] Total : 83592.20 326.53 0.00 0.00 9137.51 3932.16 21979.77 00:16:23.724 00:16:23.724 real 0m1.628s 00:16:23.724 user 0m0.996s 00:16:23.724 sys 0m0.449s 00:16:23.724 ************************************ 00:16:23.724 END TEST bdev_write_zeroes 00:16:23.724 ************************************ 00:16:23.724 20:58:41 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:16:23.724 20:58:41 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:23.724 20:58:41 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:23.724 20:58:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:23.724 20:58:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.724 20:58:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:23.724 ************************************ 00:16:23.724 START TEST bdev_json_nonenclosed 00:16:23.724 ************************************ 00:16:23.724 20:58:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:23.724 [2024-11-20 20:58:41.726021] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:16:23.724 [2024-11-20 20:58:41.726137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84364 ] 00:16:23.985 [2024-11-20 20:58:41.872721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.985 [2024-11-20 20:58:41.901549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.985 [2024-11-20 20:58:41.901656] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:23.985 [2024-11-20 20:58:41.901673] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:23.985 [2024-11-20 20:58:41.901685] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:23.985 ************************************ 00:16:23.985 END TEST bdev_json_nonenclosed 00:16:23.985 ************************************ 00:16:23.985 00:16:23.985 real 0m0.308s 00:16:23.985 user 0m0.118s 00:16:23.985 sys 0m0.086s 00:16:23.985 20:58:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.985 20:58:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:23.985 20:58:42 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:23.985 20:58:42 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:23.985 20:58:42 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.985 20:58:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:23.985 ************************************ 00:16:23.985 START TEST bdev_json_nonarray 00:16:23.985 ************************************ 00:16:23.985 20:58:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:24.246 [2024-11-20 20:58:42.107043] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:16:24.246 [2024-11-20 20:58:42.107173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84385 ] 00:16:24.246 [2024-11-20 20:58:42.256241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.246 [2024-11-20 20:58:42.299543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.246 [2024-11-20 20:58:42.299657] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:24.246 [2024-11-20 20:58:42.299675] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:24.246 [2024-11-20 20:58:42.299687] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:24.507 00:16:24.507 real 0m0.338s 00:16:24.507 user 0m0.146s 00:16:24.507 sys 0m0.087s 00:16:24.507 20:58:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:24.507 20:58:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:24.507 ************************************ 00:16:24.507 END TEST bdev_json_nonarray 00:16:24.507 ************************************ 00:16:24.507 20:58:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:16:24.507 20:58:42 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:16:24.507 20:58:42 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:16:24.507 20:58:42 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:24.507 20:58:42 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:24.507 20:58:42 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:24.507 20:58:42 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:24.507 20:58:42 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:24.507 20:58:42 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:24.507 20:58:42 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:24.507 20:58:42 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:24.507 20:58:42 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:25.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:30.371 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:30.371 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:30.943 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:30.943 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:30.943 00:16:30.943 real 0m49.132s 00:16:30.943 user 1m12.054s 00:16:30.943 sys 0m37.930s 00:16:30.943 20:58:48 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.943 ************************************ 00:16:30.943 END TEST blockdev_xnvme 00:16:30.943 ************************************ 00:16:30.943 20:58:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:30.943 20:58:48 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:30.943 20:58:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:30.943 20:58:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.943 20:58:48 -- 
common/autotest_common.sh@10 -- # set +x 00:16:30.943 ************************************ 00:16:30.943 START TEST ublk 00:16:30.943 ************************************ 00:16:30.944 20:58:48 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:30.944 * Looking for test storage... 00:16:30.944 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:30.944 20:58:49 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:30.944 20:58:49 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:30.944 20:58:49 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:31.205 20:58:49 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:31.205 20:58:49 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:31.205 20:58:49 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:31.205 20:58:49 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:31.205 20:58:49 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:31.205 20:58:49 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:31.205 20:58:49 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:31.205 20:58:49 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:31.205 20:58:49 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:31.205 20:58:49 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:31.205 20:58:49 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:31.205 20:58:49 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:31.205 20:58:49 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:31.205 20:58:49 ublk -- scripts/common.sh@345 -- # : 1 00:16:31.205 20:58:49 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:31.205 20:58:49 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:31.205 20:58:49 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:31.205 20:58:49 ublk -- scripts/common.sh@353 -- # local d=1 00:16:31.205 20:58:49 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:31.205 20:58:49 ublk -- scripts/common.sh@355 -- # echo 1 00:16:31.205 20:58:49 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:31.205 20:58:49 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:31.205 20:58:49 ublk -- scripts/common.sh@353 -- # local d=2 00:16:31.205 20:58:49 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:31.205 20:58:49 ublk -- scripts/common.sh@355 -- # echo 2 00:16:31.205 20:58:49 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:31.205 20:58:49 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:31.205 20:58:49 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:31.205 20:58:49 ublk -- scripts/common.sh@368 -- # return 0 00:16:31.205 20:58:49 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:31.205 20:58:49 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:31.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.205 --rc genhtml_branch_coverage=1 00:16:31.205 --rc genhtml_function_coverage=1 00:16:31.205 --rc genhtml_legend=1 00:16:31.205 --rc geninfo_all_blocks=1 00:16:31.205 --rc geninfo_unexecuted_blocks=1 00:16:31.205 00:16:31.205 ' 00:16:31.205 20:58:49 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:31.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.205 --rc genhtml_branch_coverage=1 00:16:31.205 --rc genhtml_function_coverage=1 00:16:31.205 --rc genhtml_legend=1 00:16:31.205 --rc geninfo_all_blocks=1 00:16:31.205 --rc geninfo_unexecuted_blocks=1 00:16:31.205 00:16:31.205 ' 00:16:31.205 20:58:49 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:31.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.205 --rc genhtml_branch_coverage=1 00:16:31.205 --rc genhtml_function_coverage=1 00:16:31.205 --rc genhtml_legend=1 00:16:31.205 --rc geninfo_all_blocks=1 00:16:31.205 --rc geninfo_unexecuted_blocks=1 00:16:31.205 00:16:31.205 ' 00:16:31.205 20:58:49 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:31.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:31.205 --rc genhtml_branch_coverage=1 00:16:31.205 --rc genhtml_function_coverage=1 00:16:31.205 --rc genhtml_legend=1 00:16:31.205 --rc geninfo_all_blocks=1 00:16:31.205 --rc geninfo_unexecuted_blocks=1 00:16:31.205 00:16:31.205 ' 00:16:31.205 20:58:49 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:31.205 20:58:49 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:31.205 20:58:49 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:31.205 20:58:49 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:31.205 20:58:49 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:31.205 20:58:49 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:31.205 20:58:49 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:31.205 20:58:49 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:31.205 20:58:49 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:31.205 20:58:49 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:31.205 20:58:49 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:31.205 20:58:49 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:31.205 20:58:49 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:31.205 20:58:49 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:31.205 20:58:49 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:31.205 20:58:49 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:31.205 20:58:49 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:31.205 20:58:49 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:31.205 20:58:49 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:31.205 20:58:49 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:31.205 20:58:49 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:31.205 20:58:49 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.205 20:58:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:31.205 ************************************ 00:16:31.205 START TEST test_save_ublk_config 00:16:31.205 ************************************ 00:16:31.205 20:58:49 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:31.205 20:58:49 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:31.205 20:58:49 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=84683 00:16:31.205 20:58:49 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:31.205 20:58:49 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 84683 00:16:31.205 20:58:49 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84683 ']' 00:16:31.205 20:58:49 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.205 20:58:49 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.205 20:58:49 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.205 20:58:49 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.205 20:58:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:31.205 20:58:49 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:31.205 [2024-11-20 20:58:49.204496] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
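The trace above loads the kernel driver (ublk.sh@133) and boots the SPDK target with ublk debug logging (ublk.sh@102) before the first test runs. A minimal sketch of the same setup outside the harness, assuming a built SPDK checkout in $SPDK_DIR and root privileges; the polling loop stands in for the harness's waitforlisten helper and is illustrative, not its actual implementation:

  # Load the kernel-side ublk driver, as ublk.sh@133 does.
  sudo modprobe ublk_drv

  # Launch the target with ublk debug logging enabled (ublk.sh@102 uses -L ublk).
  sudo "$SPDK_DIR/build/bin/spdk_tgt" -L ublk &
  tgtpid=$!

  # Poll the default RPC socket (/var/tmp/spdk.sock) until the target answers,
  # roughly what waitforlisten does inside the harness.
  until sudo "$SPDK_DIR/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
  done
  echo "spdk_tgt ($tgtpid) is up"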
00:16:31.205 [2024-11-20 20:58:49.204649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84683 ] 00:16:31.467 [2024-11-20 20:58:49.352942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.467 [2024-11-20 20:58:49.381491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.039 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.039 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:32.039 20:58:50 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:32.039 20:58:50 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:32.039 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.039 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:32.039 [2024-11-20 20:58:50.055770] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:32.039 [2024-11-20 20:58:50.056737] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:32.039 malloc0 00:16:32.039 [2024-11-20 20:58:50.087929] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:32.039 [2024-11-20 20:58:50.088031] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:32.039 [2024-11-20 20:58:50.088043] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:32.039 [2024-11-20 20:58:50.088059] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:32.039 [2024-11-20 20:58:50.096877] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:32.039 [2024-11-20 20:58:50.096917] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:32.039 [2024-11-20 20:58:50.102801] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:32.039 [2024-11-20 20:58:50.102938] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:32.039 [2024-11-20 20:58:50.120778] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:32.039 0 00:16:32.039 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.039 20:58:50 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:32.039 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.039 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:32.301 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.301 20:58:50 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:32.301 "subsystems": [ 00:16:32.301 { 00:16:32.301 "subsystem": "fsdev", 00:16:32.301 "config": [ 00:16:32.301 { 00:16:32.301 "method": "fsdev_set_opts", 00:16:32.301 "params": { 00:16:32.301 "fsdev_io_pool_size": 65535, 00:16:32.301 "fsdev_io_cache_size": 256 00:16:32.301 } 00:16:32.301 } 00:16:32.301 ] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "keyring", 00:16:32.301 "config": [] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "iobuf", 00:16:32.301 "config": [ 00:16:32.301 { 
00:16:32.301 "method": "iobuf_set_options", 00:16:32.301 "params": { 00:16:32.301 "small_pool_count": 8192, 00:16:32.301 "large_pool_count": 1024, 00:16:32.301 "small_bufsize": 8192, 00:16:32.301 "large_bufsize": 135168, 00:16:32.301 "enable_numa": false 00:16:32.301 } 00:16:32.301 } 00:16:32.301 ] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "sock", 00:16:32.301 "config": [ 00:16:32.301 { 00:16:32.301 "method": "sock_set_default_impl", 00:16:32.301 "params": { 00:16:32.301 "impl_name": "posix" 00:16:32.301 } 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "method": "sock_impl_set_options", 00:16:32.301 "params": { 00:16:32.301 "impl_name": "ssl", 00:16:32.301 "recv_buf_size": 4096, 00:16:32.301 "send_buf_size": 4096, 00:16:32.301 "enable_recv_pipe": true, 00:16:32.301 "enable_quickack": false, 00:16:32.301 "enable_placement_id": 0, 00:16:32.301 "enable_zerocopy_send_server": true, 00:16:32.301 "enable_zerocopy_send_client": false, 00:16:32.301 "zerocopy_threshold": 0, 00:16:32.301 "tls_version": 0, 00:16:32.301 "enable_ktls": false 00:16:32.301 } 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "method": "sock_impl_set_options", 00:16:32.301 "params": { 00:16:32.301 "impl_name": "posix", 00:16:32.301 "recv_buf_size": 2097152, 00:16:32.301 "send_buf_size": 2097152, 00:16:32.301 "enable_recv_pipe": true, 00:16:32.301 "enable_quickack": false, 00:16:32.301 "enable_placement_id": 0, 00:16:32.301 "enable_zerocopy_send_server": true, 00:16:32.301 "enable_zerocopy_send_client": false, 00:16:32.301 "zerocopy_threshold": 0, 00:16:32.301 "tls_version": 0, 00:16:32.301 "enable_ktls": false 00:16:32.301 } 00:16:32.301 } 00:16:32.301 ] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "vmd", 00:16:32.301 "config": [] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "accel", 00:16:32.301 "config": [ 00:16:32.301 { 00:16:32.301 "method": "accel_set_options", 00:16:32.301 "params": { 00:16:32.301 "small_cache_size": 128, 00:16:32.301 "large_cache_size": 16, 00:16:32.301 "task_count": 2048, 00:16:32.301 "sequence_count": 2048, 00:16:32.301 "buf_count": 2048 00:16:32.301 } 00:16:32.301 } 00:16:32.301 ] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "bdev", 00:16:32.301 "config": [ 00:16:32.301 { 00:16:32.301 "method": "bdev_set_options", 00:16:32.301 "params": { 00:16:32.301 "bdev_io_pool_size": 65535, 00:16:32.301 "bdev_io_cache_size": 256, 00:16:32.301 "bdev_auto_examine": true, 00:16:32.301 "iobuf_small_cache_size": 128, 00:16:32.301 "iobuf_large_cache_size": 16 00:16:32.301 } 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "method": "bdev_raid_set_options", 00:16:32.301 "params": { 00:16:32.301 "process_window_size_kb": 1024, 00:16:32.301 "process_max_bandwidth_mb_sec": 0 00:16:32.301 } 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "method": "bdev_iscsi_set_options", 00:16:32.301 "params": { 00:16:32.301 "timeout_sec": 30 00:16:32.301 } 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "method": "bdev_nvme_set_options", 00:16:32.301 "params": { 00:16:32.301 "action_on_timeout": "none", 00:16:32.301 "timeout_us": 0, 00:16:32.301 "timeout_admin_us": 0, 00:16:32.301 "keep_alive_timeout_ms": 10000, 00:16:32.301 "arbitration_burst": 0, 00:16:32.301 "low_priority_weight": 0, 00:16:32.301 "medium_priority_weight": 0, 00:16:32.301 "high_priority_weight": 0, 00:16:32.301 "nvme_adminq_poll_period_us": 10000, 00:16:32.301 "nvme_ioq_poll_period_us": 0, 00:16:32.301 "io_queue_requests": 0, 00:16:32.301 "delay_cmd_submit": true, 00:16:32.301 "transport_retry_count": 4, 00:16:32.301 
"bdev_retry_count": 3, 00:16:32.301 "transport_ack_timeout": 0, 00:16:32.301 "ctrlr_loss_timeout_sec": 0, 00:16:32.301 "reconnect_delay_sec": 0, 00:16:32.301 "fast_io_fail_timeout_sec": 0, 00:16:32.301 "disable_auto_failback": false, 00:16:32.301 "generate_uuids": false, 00:16:32.301 "transport_tos": 0, 00:16:32.301 "nvme_error_stat": false, 00:16:32.301 "rdma_srq_size": 0, 00:16:32.301 "io_path_stat": false, 00:16:32.301 "allow_accel_sequence": false, 00:16:32.301 "rdma_max_cq_size": 0, 00:16:32.301 "rdma_cm_event_timeout_ms": 0, 00:16:32.301 "dhchap_digests": [ 00:16:32.301 "sha256", 00:16:32.301 "sha384", 00:16:32.301 "sha512" 00:16:32.301 ], 00:16:32.301 "dhchap_dhgroups": [ 00:16:32.301 "null", 00:16:32.301 "ffdhe2048", 00:16:32.301 "ffdhe3072", 00:16:32.301 "ffdhe4096", 00:16:32.301 "ffdhe6144", 00:16:32.301 "ffdhe8192" 00:16:32.301 ] 00:16:32.301 } 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "method": "bdev_nvme_set_hotplug", 00:16:32.301 "params": { 00:16:32.301 "period_us": 100000, 00:16:32.301 "enable": false 00:16:32.301 } 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "method": "bdev_malloc_create", 00:16:32.301 "params": { 00:16:32.301 "name": "malloc0", 00:16:32.301 "num_blocks": 8192, 00:16:32.301 "block_size": 4096, 00:16:32.301 "physical_block_size": 4096, 00:16:32.301 "uuid": "c14f487d-09d4-4dea-8f22-0f5df003f610", 00:16:32.301 "optimal_io_boundary": 0, 00:16:32.301 "md_size": 0, 00:16:32.301 "dif_type": 0, 00:16:32.301 "dif_is_head_of_md": false, 00:16:32.301 "dif_pi_format": 0 00:16:32.301 } 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "method": "bdev_wait_for_examine" 00:16:32.301 } 00:16:32.301 ] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "scsi", 00:16:32.301 "config": null 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "scheduler", 00:16:32.301 "config": [ 00:16:32.301 { 00:16:32.301 "method": "framework_set_scheduler", 00:16:32.301 "params": { 00:16:32.301 "name": "static" 00:16:32.301 } 00:16:32.301 } 00:16:32.301 ] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "vhost_scsi", 00:16:32.301 "config": [] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "vhost_blk", 00:16:32.301 "config": [] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "ublk", 00:16:32.301 "config": [ 00:16:32.301 { 00:16:32.301 "method": "ublk_create_target", 00:16:32.301 "params": { 00:16:32.301 "cpumask": "1" 00:16:32.301 } 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "method": "ublk_start_disk", 00:16:32.301 "params": { 00:16:32.301 "bdev_name": "malloc0", 00:16:32.301 "ublk_id": 0, 00:16:32.301 "num_queues": 1, 00:16:32.301 "queue_depth": 128 00:16:32.301 } 00:16:32.301 } 00:16:32.301 ] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "nbd", 00:16:32.301 "config": [] 00:16:32.301 }, 00:16:32.301 { 00:16:32.301 "subsystem": "nvmf", 00:16:32.301 "config": [ 00:16:32.301 { 00:16:32.301 "method": "nvmf_set_config", 00:16:32.301 "params": { 00:16:32.301 "discovery_filter": "match_any", 00:16:32.301 "admin_cmd_passthru": { 00:16:32.301 "identify_ctrlr": false 00:16:32.302 }, 00:16:32.302 "dhchap_digests": [ 00:16:32.302 "sha256", 00:16:32.302 "sha384", 00:16:32.302 "sha512" 00:16:32.302 ], 00:16:32.302 "dhchap_dhgroups": [ 00:16:32.302 "null", 00:16:32.302 "ffdhe2048", 00:16:32.302 "ffdhe3072", 00:16:32.302 "ffdhe4096", 00:16:32.302 "ffdhe6144", 00:16:32.302 "ffdhe8192" 00:16:32.302 ] 00:16:32.302 } 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "method": "nvmf_set_max_subsystems", 00:16:32.302 "params": { 00:16:32.302 "max_subsystems": 1024 
00:16:32.302 } 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "method": "nvmf_set_crdt", 00:16:32.302 "params": { 00:16:32.302 "crdt1": 0, 00:16:32.302 "crdt2": 0, 00:16:32.302 "crdt3": 0 00:16:32.302 } 00:16:32.302 } 00:16:32.302 ] 00:16:32.302 }, 00:16:32.302 { 00:16:32.302 "subsystem": "iscsi", 00:16:32.302 "config": [ 00:16:32.302 { 00:16:32.302 "method": "iscsi_set_options", 00:16:32.302 "params": { 00:16:32.302 "node_base": "iqn.2016-06.io.spdk", 00:16:32.302 "max_sessions": 128, 00:16:32.302 "max_connections_per_session": 2, 00:16:32.302 "max_queue_depth": 64, 00:16:32.302 "default_time2wait": 2, 00:16:32.302 "default_time2retain": 20, 00:16:32.302 "first_burst_length": 8192, 00:16:32.302 "immediate_data": true, 00:16:32.302 "allow_duplicated_isid": false, 00:16:32.302 "error_recovery_level": 0, 00:16:32.302 "nop_timeout": 60, 00:16:32.302 "nop_in_interval": 30, 00:16:32.302 "disable_chap": false, 00:16:32.302 "require_chap": false, 00:16:32.302 "mutual_chap": false, 00:16:32.302 "chap_group": 0, 00:16:32.302 "max_large_datain_per_connection": 64, 00:16:32.302 "max_r2t_per_connection": 4, 00:16:32.302 "pdu_pool_size": 36864, 00:16:32.302 "immediate_data_pool_size": 16384, 00:16:32.302 "data_out_pool_size": 2048 00:16:32.302 } 00:16:32.302 } 00:16:32.302 ] 00:16:32.302 } 00:16:32.302 ] 00:16:32.302 }' 00:16:32.302 20:58:50 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 84683 00:16:32.302 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84683 ']' 00:16:32.302 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84683 00:16:32.302 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:32.302 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.302 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84683 00:16:32.563 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.563 killing process with pid 84683 00:16:32.563 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.563 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84683' 00:16:32.563 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84683 00:16:32.563 20:58:50 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84683 00:16:32.825 [2024-11-20 20:58:50.720300] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:32.825 [2024-11-20 20:58:50.750879] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:32.825 [2024-11-20 20:58:50.751029] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:32.825 [2024-11-20 20:58:50.757791] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:32.825 [2024-11-20 20:58:50.757865] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:32.825 [2024-11-20 20:58:50.757878] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:32.825 [2024-11-20 20:58:50.757905] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:32.825 [2024-11-20 20:58:50.758055] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:33.087 20:58:51 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=84721 00:16:33.087 20:58:51 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 84721 00:16:33.087 20:58:51 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 84721 ']' 00:16:33.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.087 20:58:51 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.087 20:58:51 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.087 20:58:51 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.087 20:58:51 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.087 20:58:51 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:33.087 20:58:51 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:33.087 "subsystems": [ 00:16:33.087 { 00:16:33.087 "subsystem": "fsdev", 00:16:33.087 "config": [ 00:16:33.087 { 00:16:33.087 "method": "fsdev_set_opts", 00:16:33.087 "params": { 00:16:33.087 "fsdev_io_pool_size": 65535, 00:16:33.087 "fsdev_io_cache_size": 256 00:16:33.087 } 00:16:33.087 } 00:16:33.087 ] 00:16:33.087 }, 00:16:33.087 { 00:16:33.087 "subsystem": "keyring", 00:16:33.087 "config": [] 00:16:33.087 }, 00:16:33.087 { 00:16:33.087 "subsystem": "iobuf", 00:16:33.087 "config": [ 00:16:33.087 { 00:16:33.087 "method": "iobuf_set_options", 00:16:33.087 "params": { 00:16:33.087 "small_pool_count": 8192, 00:16:33.087 "large_pool_count": 1024, 00:16:33.087 "small_bufsize": 8192, 00:16:33.087 "large_bufsize": 135168, 00:16:33.087 "enable_numa": false 00:16:33.087 } 00:16:33.087 } 00:16:33.087 ] 00:16:33.087 }, 00:16:33.087 { 00:16:33.087 "subsystem": "sock", 00:16:33.087 "config": [ 00:16:33.087 { 00:16:33.087 "method": "sock_set_default_impl", 00:16:33.087 "params": { 00:16:33.087 "impl_name": "posix" 00:16:33.087 } 00:16:33.087 }, 00:16:33.087 { 00:16:33.087 "method": "sock_impl_set_options", 00:16:33.087 "params": { 00:16:33.087 "impl_name": "ssl", 00:16:33.087 "recv_buf_size": 4096, 00:16:33.087 "send_buf_size": 4096, 00:16:33.087 "enable_recv_pipe": true, 00:16:33.087 "enable_quickack": false, 00:16:33.087 "enable_placement_id": 0, 00:16:33.087 "enable_zerocopy_send_server": true, 00:16:33.087 "enable_zerocopy_send_client": false, 00:16:33.087 "zerocopy_threshold": 0, 00:16:33.087 "tls_version": 0, 00:16:33.087 "enable_ktls": false 00:16:33.087 } 00:16:33.087 }, 00:16:33.087 { 00:16:33.087 "method": "sock_impl_set_options", 00:16:33.087 "params": { 00:16:33.087 "impl_name": "posix", 00:16:33.087 "recv_buf_size": 2097152, 00:16:33.087 "send_buf_size": 2097152, 00:16:33.087 "enable_recv_pipe": true, 00:16:33.087 "enable_quickack": false, 00:16:33.087 "enable_placement_id": 0, 00:16:33.087 "enable_zerocopy_send_server": true, 00:16:33.087 "enable_zerocopy_send_client": false, 00:16:33.087 "zerocopy_threshold": 0, 00:16:33.087 "tls_version": 0, 00:16:33.087 "enable_ktls": false 00:16:33.087 } 00:16:33.087 } 00:16:33.087 ] 00:16:33.087 }, 00:16:33.087 { 00:16:33.087 "subsystem": "vmd", 00:16:33.087 "config": [] 00:16:33.087 }, 00:16:33.087 { 00:16:33.087 "subsystem": "accel", 00:16:33.087 "config": [ 00:16:33.087 { 00:16:33.087 "method": "accel_set_options", 00:16:33.087 "params": { 00:16:33.087 "small_cache_size": 128, 00:16:33.087 "large_cache_size": 16, 00:16:33.087 "task_count": 2048, 00:16:33.087 
"sequence_count": 2048, 00:16:33.087 "buf_count": 2048 00:16:33.087 } 00:16:33.087 } 00:16:33.087 ] 00:16:33.087 }, 00:16:33.087 { 00:16:33.087 "subsystem": "bdev", 00:16:33.087 "config": [ 00:16:33.087 { 00:16:33.087 "method": "bdev_set_options", 00:16:33.087 "params": { 00:16:33.087 "bdev_io_pool_size": 65535, 00:16:33.087 "bdev_io_cache_size": 256, 00:16:33.087 "bdev_auto_examine": true, 00:16:33.087 "iobuf_small_cache_size": 128, 00:16:33.087 "iobuf_large_cache_size": 16 00:16:33.087 } 00:16:33.087 }, 00:16:33.087 { 00:16:33.087 "method": "bdev_raid_set_options", 00:16:33.087 "params": { 00:16:33.087 "process_window_size_kb": 1024, 00:16:33.087 "process_max_bandwidth_mb_sec": 0 00:16:33.087 } 00:16:33.087 }, 00:16:33.087 { 00:16:33.087 "method": "bdev_iscsi_set_options", 00:16:33.087 "params": { 00:16:33.087 "timeout_sec": 30 00:16:33.087 } 00:16:33.087 }, 00:16:33.087 { 00:16:33.087 "method": "bdev_nvme_set_options", 00:16:33.087 "params": { 00:16:33.087 "action_on_timeout": "none", 00:16:33.087 "timeout_us": 0, 00:16:33.087 "timeout_admin_us": 0, 00:16:33.087 "keep_alive_timeout_ms": 10000, 00:16:33.087 "arbitration_burst": 0, 00:16:33.087 "low_priority_weight": 0, 00:16:33.087 "medium_priority_weight": 0, 00:16:33.087 "high_priority_weight": 0, 00:16:33.087 "nvme_adminq_poll_period_us": 10000, 00:16:33.087 "nvme_ioq_poll_period_us": 0, 00:16:33.087 "io_queue_requests": 0, 00:16:33.087 "delay_cmd_submit": true, 00:16:33.087 "transport_retry_count": 4, 00:16:33.087 "bdev_retry_count": 3, 00:16:33.087 "transport_ack_timeout": 0, 00:16:33.087 "ctrlr_loss_timeout_sec": 0, 00:16:33.087 "reconnect_delay_sec": 0, 00:16:33.087 "fast_io_fail_timeout_sec": 0, 00:16:33.087 "disable_auto_failback": false, 00:16:33.087 "generate_uuids": false, 00:16:33.087 "transport_tos": 0, 00:16:33.087 "nvme_error_stat": false, 00:16:33.087 "rdma_srq_size": 0, 00:16:33.087 "io_path_stat": false, 00:16:33.087 "allow_accel_sequence": false, 00:16:33.087 "rdma_max_cq_size": 0, 00:16:33.087 "rdma_cm_event_timeout_ms": 0, 00:16:33.087 "dhchap_digests": [ 00:16:33.087 "sha256", 00:16:33.087 "sha384", 00:16:33.087 "sha512" 00:16:33.087 ], 00:16:33.087 "dhchap_dhgroups": [ 00:16:33.087 "null", 00:16:33.087 "ffdhe2048", 00:16:33.087 "ffdhe3072", 00:16:33.087 "ffdhe4096", 00:16:33.087 "ffdhe6144", 00:16:33.087 "ffdhe8192" 00:16:33.087 ] 00:16:33.087 } 00:16:33.087 }, 00:16:33.087 { 00:16:33.087 "method": "bdev_nvme_set_hotplug", 00:16:33.087 "params": { 00:16:33.087 "period_us": 100000, 00:16:33.087 "enable": false 00:16:33.087 } 00:16:33.088 }, 00:16:33.088 { 00:16:33.088 "method": "bdev_malloc_create", 00:16:33.088 "params": { 00:16:33.088 "name": "malloc0", 00:16:33.088 "num_blocks": 8192, 00:16:33.088 "block_size": 4096, 00:16:33.088 "physical_block_size": 4096, 00:16:33.088 "uuid": "c14f487d-09d4-4dea-8f22-0f5df003f610", 00:16:33.088 "optimal_io_boundary": 0, 00:16:33.088 "md_size": 0, 00:16:33.088 "dif_type": 0, 00:16:33.088 "dif_is_head_of_md": false, 00:16:33.088 "dif_pi_format": 0 00:16:33.088 } 00:16:33.088 }, 00:16:33.088 { 00:16:33.088 "method": "bdev_wait_for_examine" 00:16:33.088 } 00:16:33.088 ] 00:16:33.088 }, 00:16:33.088 { 00:16:33.088 "subsystem": "scsi", 00:16:33.088 "config": null 00:16:33.088 }, 00:16:33.088 { 00:16:33.088 "subsystem": "scheduler", 00:16:33.088 "config": [ 00:16:33.088 { 00:16:33.088 "method": "framework_set_scheduler", 00:16:33.088 "params": { 00:16:33.088 "name": "static" 00:16:33.088 } 00:16:33.088 } 00:16:33.088 ] 00:16:33.088 }, 00:16:33.088 { 00:16:33.088 "subsystem": 
"vhost_scsi", 00:16:33.088 "config": [] 00:16:33.088 }, 00:16:33.088 { 00:16:33.088 "subsystem": "vhost_blk", 00:16:33.088 "config": [] 00:16:33.088 }, 00:16:33.088 { 00:16:33.088 "subsystem": "ublk", 00:16:33.088 "config": [ 00:16:33.088 { 00:16:33.088 "method": "ublk_create_target", 00:16:33.088 "params": { 00:16:33.088 "cpumask": "1" 00:16:33.088 } 00:16:33.088 }, 00:16:33.088 { 00:16:33.088 "method": "ublk_start_disk", 00:16:33.088 "params": { 00:16:33.088 "bdev_name": "malloc0", 00:16:33.088 "ublk_id": 0, 00:16:33.088 "num_queues": 1, 00:16:33.088 "queue_depth": 128 00:16:33.088 } 00:16:33.088 } 00:16:33.088 ] 00:16:33.088 }, 00:16:33.088 { 00:16:33.088 "subsystem": "nbd", 00:16:33.088 "config": [] 00:16:33.088 }, 00:16:33.088 { 00:16:33.088 "subsystem": "nvmf", 00:16:33.088 "config": [ 00:16:33.088 { 00:16:33.088 "method": "nvmf_set_config", 00:16:33.088 "params": { 00:16:33.088 "discovery_filter": "match_any", 00:16:33.088 "admin_cmd_passthru": { 00:16:33.088 "identify_ctrlr": false 00:16:33.088 }, 00:16:33.088 "dhchap_digests": [ 00:16:33.088 "sha256", 00:16:33.088 "sha384", 00:16:33.088 "sha512" 00:16:33.088 ], 00:16:33.088 "dhchap_dhgroups": [ 00:16:33.088 "null", 00:16:33.088 "ffdhe2048", 00:16:33.088 "ffdhe3072", 00:16:33.088 "ffdhe4096", 00:16:33.088 "ffdhe6144", 00:16:33.088 "ffdhe8192" 00:16:33.088 ] 00:16:33.088 } 00:16:33.088 }, 00:16:33.088 { 00:16:33.088 "method": "nvmf_set_max_subsystems", 00:16:33.088 "params": { 00:16:33.088 "max_subsystems": 1024 00:16:33.088 } 00:16:33.088 }, 00:16:33.088 { 00:16:33.088 "method": "nvmf_set_crdt", 00:16:33.088 "params": { 00:16:33.088 "crdt1": 0, 00:16:33.088 "crdt2": 0, 00:16:33.088 "crdt3": 0 00:16:33.088 } 00:16:33.088 } 00:16:33.088 ] 00:16:33.088 }, 00:16:33.088 { 00:16:33.088 "subsystem": "iscsi", 00:16:33.088 "config": [ 00:16:33.088 { 00:16:33.088 "method": "iscsi_set_options", 00:16:33.088 "params": { 00:16:33.088 "node_base": "iqn.2016-06.io.spdk", 00:16:33.088 "max_sessions": 128, 00:16:33.088 "max_connections_per_session": 2, 00:16:33.088 "max_queue_depth": 64, 00:16:33.088 "default_time2wait": 2, 00:16:33.088 "default_time2retain": 20, 00:16:33.088 "first_burst_length": 8192, 00:16:33.088 "immediate_data": true, 00:16:33.088 "allow_duplicated_isid": false, 00:16:33.088 "error_recovery_level": 0, 00:16:33.088 "nop_timeout": 60, 00:16:33.088 "nop_in_interval": 30, 00:16:33.088 "disable_chap": false, 00:16:33.088 "require_chap": false, 00:16:33.088 "mutual_chap": false, 00:16:33.088 "chap_group": 0, 00:16:33.088 "max_large_datain_per_connection": 64, 00:16:33.088 "max_r2t_per_connection": 4, 00:16:33.088 "pdu_pool_size": 36864, 00:16:33.088 "immediate_data_pool_size": 16384, 00:16:33.088 "data_out_pool_size": 2048 00:16:33.088 } 00:16:33.088 } 00:16:33.088 ] 00:16:33.088 } 00:16:33.088 ] 00:16:33.088 }' 00:16:33.088 20:58:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:33.349 [2024-11-20 20:58:51.276493] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:16:33.349 [2024-11-20 20:58:51.276649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84721 ] 00:16:33.349 [2024-11-20 20:58:51.419187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.349 [2024-11-20 20:58:51.447913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.922 [2024-11-20 20:58:51.830799] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:33.922 [2024-11-20 20:58:51.831181] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:33.922 [2024-11-20 20:58:51.841895] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:33.922 [2024-11-20 20:58:51.841980] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:33.922 [2024-11-20 20:58:51.841988] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:33.922 [2024-11-20 20:58:51.841999] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:33.922 [2024-11-20 20:58:51.857807] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:33.923 [2024-11-20 20:58:51.857840] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:33.923 [2024-11-20 20:58:51.865785] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:33.923 [2024-11-20 20:58:51.865901] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:33.923 [2024-11-20 20:58:51.889801] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 84721 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 84721 ']' 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 84721 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84721 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:34.184 killing process with pid 84721 00:16:34.184 
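killprocess only signals the target process; the EXIT trap and the target's shutdown path then walk the device through UBLK_CMD_STOP_DEV and UBLK_CMD_DEL_DEV, as the DEBUG lines around this point show. When tearing down by hand instead, the same ordering can be requested explicitly over RPC — a sketch, assuming the default socket and the $tgtpid/$SPDK_DIR names from the earlier sketches:

  # Stop the ublk block device first (STOP_DEV, then DEL_DEV in the driver),
  # then remove the ublk target and its per-queue handling.
  sudo "$SPDK_DIR/scripts/rpc.py" ublk_stop_disk 0
  sudo "$SPDK_DIR/scripts/rpc.py" ublk_destroy_target

  # Only now stop the target process and, if desired, unload the driver.
  sudo kill "$tgtpid"; wait "$tgtpid"
  sudo modprobe -r ublk_drv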
20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84721' 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 84721 00:16:34.184 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 84721 00:16:34.445 [2024-11-20 20:58:52.480854] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:34.446 [2024-11-20 20:58:52.520890] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:34.446 [2024-11-20 20:58:52.521043] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:34.446 [2024-11-20 20:58:52.528785] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:34.446 [2024-11-20 20:58:52.528859] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:34.446 [2024-11-20 20:58:52.528876] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:34.446 [2024-11-20 20:58:52.528911] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:34.446 [2024-11-20 20:58:52.529084] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:35.019 20:58:52 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:35.019 00:16:35.019 real 0m3.850s 00:16:35.019 user 0m2.633s 00:16:35.019 sys 0m1.879s 00:16:35.019 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.019 ************************************ 00:16:35.019 END TEST test_save_ublk_config 00:16:35.019 ************************************ 00:16:35.019 20:58:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:35.019 20:58:53 ublk -- ublk/ublk.sh@139 -- # spdk_pid=84772 00:16:35.019 20:58:53 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:35.019 20:58:53 ublk -- ublk/ublk.sh@141 -- # waitforlisten 84772 00:16:35.019 20:58:53 ublk -- common/autotest_common.sh@835 -- # '[' -z 84772 ']' 00:16:35.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.019 20:58:53 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.019 20:58:53 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:35.019 20:58:53 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.019 20:58:53 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:35.019 20:58:53 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:35.019 20:58:53 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.019 [2024-11-20 20:58:53.102917] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
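test_create_ublk runs against a target started with -m 0x3, i.e. two reactors, as the two "Reactor started" notices that follow confirm. The trace then creates the ublk target, a 128 MiB malloc bdev with 4096-byte blocks, and a disk with 4 queues of depth 512 (ublk.sh@33, @35, @37). The same steps as plain rpc.py calls — a sketch under the same $SPDK_DIR assumption:

  # Create the ublk target inside spdk_tgt (ublk.sh@33).
  sudo "$SPDK_DIR/scripts/rpc.py" ublk_create_target

  # Back it with a 128 MiB, 4096-byte-block malloc bdev (ublk.sh@35).
  sudo "$SPDK_DIR/scripts/rpc.py" bdev_malloc_create -b Malloc0 128 4096

  # Expose the bdev as /dev/ublkb0 with 4 queues of depth 512 (ublk.sh@37).
  sudo "$SPDK_DIR/scripts/rpc.py" ublk_start_disk Malloc0 0 -q 4 -d 512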
00:16:35.019 [2024-11-20 20:58:53.103067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84772 ] 00:16:35.281 [2024-11-20 20:58:53.249368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:35.281 [2024-11-20 20:58:53.279906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.281 [2024-11-20 20:58:53.280055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.853 20:58:53 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.853 20:58:53 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:35.853 20:58:53 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:35.853 20:58:53 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:35.853 20:58:53 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.853 20:58:53 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:35.853 ************************************ 00:16:35.853 START TEST test_create_ublk 00:16:35.853 ************************************ 00:16:35.853 20:58:53 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:35.853 20:58:53 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:35.853 20:58:53 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.853 20:58:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.115 [2024-11-20 20:58:53.974775] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:36.115 [2024-11-20 20:58:53.976470] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:36.115 20:58:53 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.115 20:58:53 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:36.115 20:58:53 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:36.115 20:58:53 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.115 20:58:53 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.115 20:58:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.115 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:36.115 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:36.115 20:58:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.115 20:58:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.115 [2024-11-20 20:58:54.059921] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:36.115 [2024-11-20 20:58:54.060387] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:36.115 [2024-11-20 20:58:54.060401] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:36.115 [2024-11-20 20:58:54.060411] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:36.115 [2024-11-20 20:58:54.067804] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:36.115 [2024-11-20 20:58:54.067837] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:36.115 
[2024-11-20 20:58:54.075800] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:36.115 [2024-11-20 20:58:54.076553] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:36.115 [2024-11-20 20:58:54.117787] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:36.115 20:58:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.115 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:36.115 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:36.115 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:36.115 20:58:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.115 20:58:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:36.115 20:58:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.115 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:36.115 { 00:16:36.115 "ublk_device": "/dev/ublkb0", 00:16:36.115 "id": 0, 00:16:36.115 "queue_depth": 512, 00:16:36.115 "num_queues": 4, 00:16:36.115 "bdev_name": "Malloc0" 00:16:36.115 } 00:16:36.115 ]' 00:16:36.115 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:36.115 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:36.115 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:36.115 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:36.115 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:36.377 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:36.377 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:36.377 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:36.377 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:36.377 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:36.377 20:58:54 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:36.377 20:58:54 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:36.377 20:58:54 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:36.377 20:58:54 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:36.377 20:58:54 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:36.377 20:58:54 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:36.377 20:58:54 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:36.377 20:58:54 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:36.377 20:58:54 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:36.377 20:58:54 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:36.377 20:58:54 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
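Before running I/O, the test asserts every field that ublk_get_disks reports for the new device (ublk.sh@41-45), then run_fio_test assembles the 10-second, pattern-verified write job whose invocation and results follow. A compact way to express the same field checks, using the exact jq paths from the trace; the set -e framing is illustrative:

  set -e
  disks=$(sudo "$SPDK_DIR/scripts/rpc.py" ublk_get_disks -n 0)

  # Same assertions as ublk.sh@41-45, one jq path per field.
  [ "$(jq -r '.[0].ublk_device' <<<"$disks")" = /dev/ublkb0 ]
  [ "$(jq -r '.[0].id'          <<<"$disks")" = 0 ]
  [ "$(jq -r '.[0].queue_depth' <<<"$disks")" = 512 ]
  [ "$(jq -r '.[0].num_queues'  <<<"$disks")" = 4 ]
  [ "$(jq -r '.[0].bdev_name'   <<<"$disks")" = Malloc0 ]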
00:16:36.377 20:58:54 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:36.377 fio: verification read phase will never start because write phase uses all of runtime 00:16:36.377 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:36.377 fio-3.35 00:16:36.377 Starting 1 process 00:16:48.695 00:16:48.695 fio_test: (groupid=0, jobs=1): err= 0: pid=84815: Wed Nov 20 20:59:04 2024 00:16:48.695 write: IOPS=14.1k, BW=55.1MiB/s (57.8MB/s)(551MiB/10001msec); 0 zone resets 00:16:48.695 clat (usec): min=32, max=10636, avg=70.11, stdev=123.65 00:16:48.695 lat (usec): min=33, max=10652, avg=70.53, stdev=123.69 00:16:48.695 clat percentiles (usec): 00:16:48.695 | 1.00th=[ 55], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 61], 00:16:48.695 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 65], 00:16:48.695 | 70.00th=[ 67], 80.00th=[ 69], 90.00th=[ 73], 95.00th=[ 77], 00:16:48.695 | 99.00th=[ 95], 99.50th=[ 188], 99.90th=[ 2540], 99.95th=[ 3425], 00:16:48.695 | 99.99th=[ 4178] 00:16:48.695 bw ( KiB/s): min=16752, max=59760, per=99.81%, avg=56350.74, stdev=9760.68, samples=19 00:16:48.695 iops : min= 4188, max=14940, avg=14087.68, stdev=2440.17, samples=19 00:16:48.695 lat (usec) : 50=0.04%, 100=99.14%, 250=0.54%, 500=0.09%, 750=0.01% 00:16:48.695 lat (usec) : 1000=0.01% 00:16:48.695 lat (msec) : 2=0.05%, 4=0.08%, 10=0.04%, 20=0.01% 00:16:48.695 cpu : usr=1.73%, sys=10.12%, ctx=141189, majf=0, minf=795 00:16:48.695 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:48.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.695 issued rwts: total=0,141157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.695 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:48.695 00:16:48.695 Run status group 0 (all jobs): 00:16:48.695 WRITE: bw=55.1MiB/s (57.8MB/s), 55.1MiB/s-55.1MiB/s (57.8MB/s-57.8MB/s), io=551MiB (578MB), run=10001-10001msec 00:16:48.695 00:16:48.695 Disk stats (read/write): 00:16:48.695 ublkb0: ios=0/139627, merge=0/0, ticks=0/8743, in_queue=8744, util=99.08% 00:16:48.695 20:59:04 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:48.695 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.695 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.695 [2024-11-20 20:59:04.551998] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.695 [2024-11-20 20:59:04.589782] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.695 [2024-11-20 20:59:04.590486] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.695 [2024-11-20 20:59:04.597776] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:48.696 [2024-11-20 20:59:04.598031] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:48.696 [2024-11-20 20:59:04.598049] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.696 20:59:04 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 [2024-11-20 20:59:04.613841] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:48.696 request: 00:16:48.696 { 00:16:48.696 "ublk_id": 0, 00:16:48.696 "method": "ublk_stop_disk", 00:16:48.696 "req_id": 1 00:16:48.696 } 00:16:48.696 Got JSON-RPC error response 00:16:48.696 response: 00:16:48.696 { 00:16:48.696 "code": -19, 00:16:48.696 "message": "No such device" 00:16:48.696 } 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:48.696 20:59:04 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 [2024-11-20 20:59:04.629830] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:48.696 [2024-11-20 20:59:04.631112] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:48.696 [2024-11-20 20:59:04.631142] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.696 20:59:04 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.696 20:59:04 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:48.696 20:59:04 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.696 20:59:04 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:48.696 20:59:04 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:48.696 20:59:04 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:48.696 20:59:04 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.696 20:59:04 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:48.696 20:59:04 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:48.696 20:59:04 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:48.696 00:16:48.696 real 0m10.823s 00:16:48.696 user 0m0.475s 00:16:48.696 sys 0m1.102s 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.696 20:59:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 ************************************ 00:16:48.696 END TEST test_create_ublk 00:16:48.696 ************************************ 00:16:48.696 20:59:04 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:48.696 20:59:04 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:48.696 20:59:04 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.696 20:59:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 ************************************ 00:16:48.696 START TEST test_create_multi_ublk 00:16:48.696 ************************************ 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 [2024-11-20 20:59:04.839759] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:48.696 [2024-11-20 20:59:04.840629] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 [2024-11-20 20:59:04.911881] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:16:48.696 [2024-11-20 20:59:04.912173] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:48.696 [2024-11-20 20:59:04.912186] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:48.696 [2024-11-20 20:59:04.912192] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:48.696 [2024-11-20 20:59:04.923801] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:48.696 [2024-11-20 20:59:04.923819] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:48.696 [2024-11-20 20:59:04.935776] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:48.696 [2024-11-20 20:59:04.936252] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:48.696 [2024-11-20 20:59:04.973777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.696 20:59:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 [2024-11-20 20:59:05.057861] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:48.696 [2024-11-20 20:59:05.058154] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:48.696 [2024-11-20 20:59:05.058166] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:48.696 [2024-11-20 20:59:05.058172] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:48.696 [2024-11-20 20:59:05.069785] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:48.696 [2024-11-20 20:59:05.069803] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:48.696 [2024-11-20 20:59:05.081769] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:48.696 [2024-11-20 20:59:05.082252] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:48.696 [2024-11-20 20:59:05.106773] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.696 
20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.696 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.696 [2024-11-20 20:59:05.189849] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:48.696 [2024-11-20 20:59:05.190144] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:48.696 [2024-11-20 20:59:05.190157] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:48.697 [2024-11-20 20:59:05.190162] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:48.697 [2024-11-20 20:59:05.201794] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:48.697 [2024-11-20 20:59:05.201811] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:48.697 [2024-11-20 20:59:05.213771] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:48.697 [2024-11-20 20:59:05.214240] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:48.697 [2024-11-20 20:59:05.238785] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.697 [2024-11-20 20:59:05.321864] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:48.697 [2024-11-20 20:59:05.322166] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:48.697 [2024-11-20 20:59:05.322177] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:48.697 [2024-11-20 20:59:05.322183] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:48.697 
[2024-11-20 20:59:05.333778] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:48.697 [2024-11-20 20:59:05.333799] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:48.697 [2024-11-20 20:59:05.345779] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:48.697 [2024-11-20 20:59:05.346258] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:48.697 [2024-11-20 20:59:05.358774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:48.697 { 00:16:48.697 "ublk_device": "/dev/ublkb0", 00:16:48.697 "id": 0, 00:16:48.697 "queue_depth": 512, 00:16:48.697 "num_queues": 4, 00:16:48.697 "bdev_name": "Malloc0" 00:16:48.697 }, 00:16:48.697 { 00:16:48.697 "ublk_device": "/dev/ublkb1", 00:16:48.697 "id": 1, 00:16:48.697 "queue_depth": 512, 00:16:48.697 "num_queues": 4, 00:16:48.697 "bdev_name": "Malloc1" 00:16:48.697 }, 00:16:48.697 { 00:16:48.697 "ublk_device": "/dev/ublkb2", 00:16:48.697 "id": 2, 00:16:48.697 "queue_depth": 512, 00:16:48.697 "num_queues": 4, 00:16:48.697 "bdev_name": "Malloc2" 00:16:48.697 }, 00:16:48.697 { 00:16:48.697 "ublk_device": "/dev/ublkb3", 00:16:48.697 "id": 3, 00:16:48.697 "queue_depth": 512, 00:16:48.697 "num_queues": 4, 00:16:48.697 "bdev_name": "Malloc3" 00:16:48.697 } 00:16:48.697 ]' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:48.697 20:59:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.697 [2024-11-20 20:59:06.026832] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.697 [2024-11-20 20:59:06.070770] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.697 [2024-11-20 20:59:06.071686] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.697 [2024-11-20 20:59:06.078783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:48.697 [2024-11-20 20:59:06.079039] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:48.697 [2024-11-20 20:59:06.079050] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.697 [2024-11-20 20:59:06.094838] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.697 [2024-11-20 20:59:06.134810] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.697 [2024-11-20 20:59:06.135625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.697 [2024-11-20 20:59:06.142786] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:48.697 [2024-11-20 20:59:06.143020] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:48.697 [2024-11-20 20:59:06.143031] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.697 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.697 [2024-11-20 20:59:06.158822] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.697 [2024-11-20 20:59:06.192293] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.697 [2024-11-20 20:59:06.193308] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.697 [2024-11-20 20:59:06.198771] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:48.698 [2024-11-20 20:59:06.199009] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:48.698 [2024-11-20 20:59:06.199020] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:48.698 [2024-11-20 20:59:06.214834] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:48.698 [2024-11-20 20:59:06.253289] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:48.698 [2024-11-20 20:59:06.254198] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:48.698 [2024-11-20 20:59:06.258774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:48.698 [2024-11-20 20:59:06.259019] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:48.698 [2024-11-20 20:59:06.259029] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:48.698 [2024-11-20 20:59:06.450817] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:48.698 [2024-11-20 20:59:06.452010] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:48.698 [2024-11-20 20:59:06.452042] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:48.698 20:59:06 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:48.698 00:16:48.698 real 0m1.970s 00:16:48.698 user 0m0.811s 00:16:48.698 sys 0m0.134s 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.698 20:59:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:48.698 ************************************ 00:16:48.698 END TEST test_create_multi_ublk 00:16:48.698 ************************************ 00:16:48.956 20:59:06 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:48.956 20:59:06 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:48.956 20:59:06 ublk -- ublk/ublk.sh@130 -- # killprocess 84772 00:16:48.956 20:59:06 ublk -- common/autotest_common.sh@954 -- # '[' -z 84772 ']' 00:16:48.956 20:59:06 ublk -- common/autotest_common.sh@958 -- # kill -0 84772 00:16:48.956 20:59:06 ublk -- common/autotest_common.sh@959 -- # uname 00:16:48.956 20:59:06 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:48.956 20:59:06 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84772 00:16:48.956 20:59:06 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:48.956 20:59:06 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:48.956 20:59:06 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84772' 00:16:48.956 killing process with pid 84772 00:16:48.956 20:59:06 ublk -- common/autotest_common.sh@973 -- # kill 84772 00:16:48.956 20:59:06 ublk -- common/autotest_common.sh@978 -- # wait 84772 00:16:48.956 [2024-11-20 20:59:07.011159] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:48.956 [2024-11-20 20:59:07.011223] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:49.215 00:16:49.215 real 0m18.333s 00:16:49.215 user 0m28.449s 00:16:49.215 sys 0m7.162s 00:16:49.215 20:59:07 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.215 ************************************ 00:16:49.215 END TEST ublk 00:16:49.215 20:59:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:49.215 ************************************ 00:16:49.216 20:59:07 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:49.216 
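The ublk_recovery run that follows reduces to: start a target, push fio I/O through one ublk device, kill -9 the target mid-run, relaunch it, and re-attach the still-live /dev/ublkb1 with ublk_recover_disk while fio keeps running. A condensed sketch using the exact flags from this run — the spdk_pid variable stands in for the concrete PIDs (85136, then 85281) seen in the trace below:

    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096      # 64 MiB, 4 KiB blocks
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128      # exposes /dev/ublkb1
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    kill -9 "$spdk_pid"                           # crash the target while fio is running
    # relaunch spdk_tgt -m 0x3 -L ublk, then:
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1    # re-attach the existing ublk device
    wait                                          # fio completes its full 60s, err=0

The repeated UBLK_CMD_GET_DEV_INFO / "device state 1" lines in the trace are the recovered target polling the kernel until the device is ready for UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY.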
20:59:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:49.216 20:59:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.216 20:59:07 -- common/autotest_common.sh@10 -- # set +x 00:16:49.216 ************************************ 00:16:49.216 START TEST ublk_recovery 00:16:49.216 ************************************ 00:16:49.216 20:59:07 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:49.477 * Looking for test storage... 00:16:49.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:49.477 20:59:07 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:49.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.477 --rc genhtml_branch_coverage=1 00:16:49.477 --rc genhtml_function_coverage=1 00:16:49.477 --rc genhtml_legend=1 00:16:49.477 --rc geninfo_all_blocks=1 00:16:49.477 --rc geninfo_unexecuted_blocks=1 00:16:49.477 00:16:49.477 ' 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:49.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.477 --rc genhtml_branch_coverage=1 00:16:49.477 --rc genhtml_function_coverage=1 00:16:49.477 --rc genhtml_legend=1 00:16:49.477 --rc geninfo_all_blocks=1 00:16:49.477 --rc geninfo_unexecuted_blocks=1 00:16:49.477 00:16:49.477 ' 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:49.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.477 --rc genhtml_branch_coverage=1 00:16:49.477 --rc genhtml_function_coverage=1 00:16:49.477 --rc genhtml_legend=1 00:16:49.477 --rc geninfo_all_blocks=1 00:16:49.477 --rc geninfo_unexecuted_blocks=1 00:16:49.477 00:16:49.477 ' 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:49.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.477 --rc genhtml_branch_coverage=1 00:16:49.477 --rc genhtml_function_coverage=1 00:16:49.477 --rc genhtml_legend=1 00:16:49.477 --rc geninfo_all_blocks=1 00:16:49.477 --rc geninfo_unexecuted_blocks=1 00:16:49.477 00:16:49.477 ' 00:16:49.477 20:59:07 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:49.477 20:59:07 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:49.477 20:59:07 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:49.477 20:59:07 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:49.477 20:59:07 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:49.477 20:59:07 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:49.477 20:59:07 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:49.477 20:59:07 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:49.477 20:59:07 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:49.477 20:59:07 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:49.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.477 20:59:07 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=85136 00:16:49.477 20:59:07 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:49.477 20:59:07 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 85136 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85136 ']' 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.477 20:59:07 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:49.477 20:59:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:49.477 [2024-11-20 20:59:07.550561] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:16:49.477 [2024-11-20 20:59:07.550680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85136 ] 00:16:49.736 [2024-11-20 20:59:07.692434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:49.736 [2024-11-20 20:59:07.710654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.736 [2024-11-20 20:59:07.710686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.302 20:59:08 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.302 20:59:08 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:50.302 20:59:08 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:50.302 20:59:08 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.302 20:59:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.302 [2024-11-20 20:59:08.388762] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:50.302 [2024-11-20 20:59:08.389706] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:50.302 20:59:08 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.302 20:59:08 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:50.302 20:59:08 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.302 20:59:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.302 malloc0 00:16:50.302 20:59:08 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.302 20:59:08 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:50.302 20:59:08 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.302 20:59:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:50.561 [2024-11-20 20:59:08.420870] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:50.561 [2024-11-20 20:59:08.420953] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:50.561 [2024-11-20 20:59:08.420958] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:50.561 [2024-11-20 20:59:08.420971] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:50.561 [2024-11-20 20:59:08.429844] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:50.561 [2024-11-20 20:59:08.429863] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:50.561 [2024-11-20 20:59:08.436774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:50.561 [2024-11-20 20:59:08.436887] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:50.561 [2024-11-20 20:59:08.451775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:50.561 1 00:16:50.561 20:59:08 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.561 20:59:08 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:51.496 20:59:09 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=85169 00:16:51.496 20:59:09 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:51.496 20:59:09 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:51.496 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:51.496 fio-3.35 00:16:51.496 Starting 1 process 00:16:56.761 20:59:14 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 85136 00:16:56.761 20:59:14 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:02.050 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 85136 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:02.050 20:59:19 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=85281 00:17:02.050 20:59:19 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:02.050 20:59:19 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:02.050 20:59:19 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 85281 00:17:02.050 20:59:19 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85281 ']' 00:17:02.050 20:59:19 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.050 20:59:19 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.050 20:59:19 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.050 20:59:19 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:02.050 20:59:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.050 [2024-11-20 20:59:19.543810] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:17:02.050 [2024-11-20 20:59:19.543938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85281 ] 00:17:02.050 [2024-11-20 20:59:19.679344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:02.050 [2024-11-20 20:59:19.696770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.050 [2024-11-20 20:59:19.696828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.308 20:59:20 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.308 20:59:20 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:02.308 20:59:20 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:02.308 20:59:20 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.308 20:59:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.308 [2024-11-20 20:59:20.333762] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:02.308 [2024-11-20 20:59:20.334728] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:02.308 20:59:20 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.308 20:59:20 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:02.308 20:59:20 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.308 20:59:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.308 malloc0 00:17:02.308 20:59:20 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.308 20:59:20 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:02.308 20:59:20 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.308 20:59:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:02.308 [2024-11-20 20:59:20.365861] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:02.308 [2024-11-20 20:59:20.365888] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:02.308 [2024-11-20 20:59:20.365895] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:02.308 [2024-11-20 20:59:20.373800] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:02.308 [2024-11-20 20:59:20.373815] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:02.308 1 00:17:02.308 20:59:20 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.308 20:59:20 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 85169 00:17:03.682 [2024-11-20 20:59:21.373850] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:03.682 [2024-11-20 20:59:21.381777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:03.682 [2024-11-20 20:59:21.381796] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:04.617 [2024-11-20 20:59:22.381813] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:04.617 [2024-11-20 20:59:22.389772] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:04.617 [2024-11-20 20:59:22.389788] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:17:05.552 [2024-11-20 20:59:23.389810] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:05.552 [2024-11-20 20:59:23.397765] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:05.552 [2024-11-20 20:59:23.397782] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:05.552 [2024-11-20 20:59:23.397789] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:05.552 [2024-11-20 20:59:23.397849] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:27.470 [2024-11-20 20:59:44.916776] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:27.470 [2024-11-20 20:59:44.921144] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:27.470 [2024-11-20 20:59:44.930942] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:27.470 [2024-11-20 20:59:44.930960] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:54.004 00:17:54.004 fio_test: (groupid=0, jobs=1): err= 0: pid=85172: Wed Nov 20 21:00:09 2024 00:17:54.004 read: IOPS=14.5k, BW=56.6MiB/s (59.3MB/s)(3395MiB/60002msec) 00:17:54.004 slat (nsec): min=874, max=1403.9k, avg=4852.91, stdev=2123.78 00:17:54.004 clat (usec): min=392, max=30474k, avg=4167.34, stdev=248912.12 00:17:54.004 lat (usec): min=653, max=30474k, avg=4172.20, stdev=248912.12 00:17:54.004 clat percentiles (usec): 00:17:54.004 | 1.00th=[ 1762], 5.00th=[ 1844], 10.00th=[ 1876], 20.00th=[ 1893], 00:17:54.004 | 30.00th=[ 1909], 40.00th=[ 1926], 50.00th=[ 1942], 60.00th=[ 1958], 00:17:54.004 | 70.00th=[ 1975], 80.00th=[ 2008], 90.00th=[ 2507], 95.00th=[ 3261], 00:17:54.004 | 99.00th=[ 5538], 99.50th=[ 5866], 99.90th=[ 8225], 99.95th=[12649], 00:17:54.004 | 99.99th=[13173] 00:17:54.004 bw ( KiB/s): min=29280, max=126728, per=100.00%, avg=115867.25, stdev=19737.91, samples=59 00:17:54.004 iops : min= 7320, max=31682, avg=28966.81, stdev=4934.48, samples=59 00:17:54.004 write: IOPS=14.5k, BW=56.5MiB/s (59.2MB/s)(3390MiB/60002msec); 0 zone resets 00:17:54.004 slat (nsec): min=843, max=454813, avg=4875.86, stdev=1605.69 00:17:54.004 clat (usec): min=568, max=30475k, avg=4665.20, stdev=273650.10 00:17:54.004 lat (usec): min=572, max=30475k, avg=4670.07, stdev=273650.10 00:17:54.004 clat percentiles (usec): 00:17:54.004 | 1.00th=[ 1795], 5.00th=[ 1926], 10.00th=[ 1958], 20.00th=[ 1975], 00:17:54.004 | 30.00th=[ 1991], 40.00th=[ 2008], 50.00th=[ 2024], 60.00th=[ 2040], 00:17:54.004 | 70.00th=[ 2057], 80.00th=[ 2089], 90.00th=[ 2573], 95.00th=[ 3195], 00:17:54.004 | 99.00th=[ 5604], 99.50th=[ 5997], 99.90th=[ 8160], 99.95th=[12649], 00:17:54.004 | 99.99th=[13304] 00:17:54.004 bw ( KiB/s): min=29080, max=126160, per=100.00%, avg=115720.27, stdev=19866.81, samples=59 00:17:54.004 iops : min= 7270, max=31540, avg=28930.07, stdev=4966.70, samples=59 00:17:54.004 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:17:54.004 lat (msec) : 2=55.45%, 4=40.95%, 10=3.52%, 20=0.06%, >=2000=0.01% 00:17:54.004 cpu : usr=3.20%, sys=14.31%, ctx=58146, majf=0, minf=14 00:17:54.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:54.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:54.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:17:54.004 issued rwts: total=869141,867879,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:54.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:54.004 00:17:54.004 Run status group 0 (all jobs): 00:17:54.004 READ: bw=56.6MiB/s (59.3MB/s), 56.6MiB/s-56.6MiB/s (59.3MB/s-59.3MB/s), io=3395MiB (3560MB), run=60002-60002msec 00:17:54.004 WRITE: bw=56.5MiB/s (59.2MB/s), 56.5MiB/s-56.5MiB/s (59.2MB/s-59.2MB/s), io=3390MiB (3555MB), run=60002-60002msec 00:17:54.004 00:17:54.004 Disk stats (read/write): 00:17:54.004 ublkb1: ios=865831/864537, merge=0/0, ticks=3569290/3926053, in_queue=7495344, util=99.88% 00:17:54.004 21:00:09 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:54.004 21:00:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.004 21:00:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:54.004 [2024-11-20 21:00:09.709879] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:54.004 [2024-11-20 21:00:09.749872] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:54.004 [2024-11-20 21:00:09.750023] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:54.004 [2024-11-20 21:00:09.757779] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:54.004 [2024-11-20 21:00:09.757876] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:54.004 [2024-11-20 21:00:09.757892] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:54.004 21:00:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.004 21:00:09 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:54.004 21:00:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.004 21:00:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:54.005 [2024-11-20 21:00:09.771838] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:54.005 [2024-11-20 21:00:09.773037] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:54.005 [2024-11-20 21:00:09.773064] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:54.005 21:00:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.005 21:00:09 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:54.005 21:00:09 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:54.005 21:00:09 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 85281 00:17:54.005 21:00:09 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 85281 ']' 00:17:54.005 21:00:09 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 85281 00:17:54.005 21:00:09 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:54.005 21:00:09 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.005 21:00:09 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85281 00:17:54.005 21:00:09 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:54.005 21:00:09 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:54.005 killing process with pid 85281 00:17:54.005 21:00:09 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85281' 00:17:54.005 21:00:09 ublk_recovery -- common/autotest_common.sh@973 -- # kill 85281 00:17:54.005 21:00:09 ublk_recovery -- common/autotest_common.sh@978 -- # 
wait 85281 00:17:54.005 [2024-11-20 21:00:09.975605] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:54.005 [2024-11-20 21:00:09.975651] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:54.005 00:17:54.005 real 1m2.930s 00:17:54.005 user 1m45.794s 00:17:54.005 sys 0m20.055s 00:17:54.005 21:00:10 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.005 ************************************ 00:17:54.005 END TEST ublk_recovery 00:17:54.005 21:00:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:54.005 ************************************ 00:17:54.005 21:00:10 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:54.005 21:00:10 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:54.005 21:00:10 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:54.005 21:00:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:54.005 21:00:10 -- common/autotest_common.sh@10 -- # set +x 00:17:54.005 21:00:10 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:54.005 21:00:10 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:54.005 21:00:10 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:54.005 21:00:10 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:54.005 21:00:10 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:54.005 21:00:10 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:54.005 21:00:10 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:54.005 21:00:10 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:54.005 21:00:10 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:54.005 21:00:10 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:54.005 21:00:10 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:54.005 21:00:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:54.005 21:00:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.005 21:00:10 -- common/autotest_common.sh@10 -- # set +x 00:17:54.005 ************************************ 00:17:54.005 START TEST ftl 00:17:54.005 ************************************ 00:17:54.005 21:00:10 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:54.005 * Looking for test storage... 
00:17:54.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:54.005 21:00:10 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:54.005 21:00:10 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:54.005 21:00:10 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:54.005 21:00:10 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:54.005 21:00:10 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:54.005 21:00:10 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:54.005 21:00:10 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:54.005 21:00:10 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:54.005 21:00:10 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:54.005 21:00:10 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:54.005 21:00:10 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:54.005 21:00:10 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:54.005 21:00:10 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:54.005 21:00:10 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:54.005 21:00:10 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:54.005 21:00:10 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:54.005 21:00:10 ftl -- scripts/common.sh@345 -- # : 1 00:17:54.005 21:00:10 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:54.005 21:00:10 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:54.005 21:00:10 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:54.005 21:00:10 ftl -- scripts/common.sh@353 -- # local d=1 00:17:54.005 21:00:10 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:54.005 21:00:10 ftl -- scripts/common.sh@355 -- # echo 1 00:17:54.005 21:00:10 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:54.005 21:00:10 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:54.005 21:00:10 ftl -- scripts/common.sh@353 -- # local d=2 00:17:54.005 21:00:10 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:54.005 21:00:10 ftl -- scripts/common.sh@355 -- # echo 2 00:17:54.005 21:00:10 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:54.005 21:00:10 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:54.005 21:00:10 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:54.005 21:00:10 ftl -- scripts/common.sh@368 -- # return 0 00:17:54.005 21:00:10 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:54.005 21:00:10 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:54.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.005 --rc genhtml_branch_coverage=1 00:17:54.005 --rc genhtml_function_coverage=1 00:17:54.005 --rc genhtml_legend=1 00:17:54.005 --rc geninfo_all_blocks=1 00:17:54.005 --rc geninfo_unexecuted_blocks=1 00:17:54.005 00:17:54.005 ' 00:17:54.005 21:00:10 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:54.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.005 --rc genhtml_branch_coverage=1 00:17:54.005 --rc genhtml_function_coverage=1 00:17:54.005 --rc genhtml_legend=1 00:17:54.005 --rc geninfo_all_blocks=1 00:17:54.005 --rc geninfo_unexecuted_blocks=1 00:17:54.005 00:17:54.005 ' 00:17:54.005 21:00:10 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:54.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.005 --rc genhtml_branch_coverage=1 00:17:54.005 --rc genhtml_function_coverage=1 00:17:54.005 --rc 
genhtml_legend=1 00:17:54.005 --rc geninfo_all_blocks=1 00:17:54.005 --rc geninfo_unexecuted_blocks=1 00:17:54.005 00:17:54.005 ' 00:17:54.005 21:00:10 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:54.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:54.005 --rc genhtml_branch_coverage=1 00:17:54.005 --rc genhtml_function_coverage=1 00:17:54.005 --rc genhtml_legend=1 00:17:54.005 --rc geninfo_all_blocks=1 00:17:54.005 --rc geninfo_unexecuted_blocks=1 00:17:54.005 00:17:54.005 ' 00:17:54.005 21:00:10 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:54.005 21:00:10 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:54.005 21:00:10 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:54.005 21:00:10 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:54.005 21:00:10 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:54.005 21:00:10 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:54.005 21:00:10 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:54.005 21:00:10 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:54.005 21:00:10 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:54.005 21:00:10 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.005 21:00:10 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.005 21:00:10 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:54.005 21:00:10 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:54.005 21:00:10 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:54.005 21:00:10 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:54.005 21:00:10 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:54.005 21:00:10 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:54.005 21:00:10 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.005 21:00:10 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:54.005 21:00:10 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:54.005 21:00:10 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:54.005 21:00:10 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:54.005 21:00:10 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:54.006 21:00:10 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:54.006 21:00:10 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:54.006 21:00:10 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:54.006 21:00:10 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:54.006 21:00:10 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:54.006 21:00:10 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:54.006 21:00:10 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:54.006 21:00:10 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:54.006 21:00:10 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:17:54.006 21:00:10 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:54.006 21:00:10 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:54.006 21:00:10 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:54.006 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:54.006 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:54.006 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:54.006 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:54.006 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:54.006 21:00:10 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=86077 00:17:54.006 21:00:10 ftl -- ftl/ftl.sh@38 -- # waitforlisten 86077 00:17:54.006 21:00:10 ftl -- common/autotest_common.sh@835 -- # '[' -z 86077 ']' 00:17:54.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.006 21:00:10 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.006 21:00:10 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.006 21:00:10 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.006 21:00:10 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.006 21:00:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:54.006 21:00:10 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:54.006 [2024-11-20 21:00:11.085646] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:17:54.006 [2024-11-20 21:00:11.085819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86077 ] 00:17:54.006 [2024-11-20 21:00:11.224923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.006 [2024-11-20 21:00:11.254648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.006 21:00:11 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.006 21:00:11 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:54.006 21:00:11 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:54.267 21:00:12 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:54.528 21:00:12 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:54.528 21:00:12 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:55.099 21:00:13 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:55.099 21:00:13 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:55.099 21:00:13 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:55.359 21:00:13 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:55.359 21:00:13 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:55.359 21:00:13 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:55.359 21:00:13 ftl -- ftl/ftl.sh@50 -- # break 00:17:55.359 21:00:13 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:55.359 21:00:13 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:17:55.359 21:00:13 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:55.359 21:00:13 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:55.620 21:00:13 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:55.620 21:00:13 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:55.620 21:00:13 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:55.620 21:00:13 ftl -- ftl/ftl.sh@63 -- # break 00:17:55.620 21:00:13 ftl -- ftl/ftl.sh@66 -- # killprocess 86077 00:17:55.620 21:00:13 ftl -- common/autotest_common.sh@954 -- # '[' -z 86077 ']' 00:17:55.620 21:00:13 ftl -- common/autotest_common.sh@958 -- # kill -0 86077 00:17:55.620 21:00:13 ftl -- common/autotest_common.sh@959 -- # uname 00:17:55.620 21:00:13 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.620 21:00:13 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86077 00:17:55.620 21:00:13 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:55.620 killing process with pid 86077 00:17:55.620 21:00:13 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:55.620 21:00:13 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86077' 00:17:55.620 21:00:13 ftl -- common/autotest_common.sh@973 -- # kill 86077 00:17:55.620 21:00:13 ftl -- common/autotest_common.sh@978 -- # wait 86077 00:17:55.881 21:00:13 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:55.881 21:00:13 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:55.881 21:00:13 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:55.881 21:00:13 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.881 21:00:13 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:55.881 ************************************ 00:17:55.881 START TEST ftl_fio_basic 00:17:55.881 ************************************ 00:17:55.881 21:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:55.881 * Looking for test storage... 
00:17:55.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:55.881 21:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:55.881 21:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:17:55.881 21:00:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:56.143 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:56.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.144 --rc genhtml_branch_coverage=1 00:17:56.144 --rc genhtml_function_coverage=1 00:17:56.144 --rc genhtml_legend=1 00:17:56.144 --rc geninfo_all_blocks=1 00:17:56.144 --rc geninfo_unexecuted_blocks=1 00:17:56.144 00:17:56.144 ' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:56.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.144 --rc 
genhtml_branch_coverage=1 00:17:56.144 --rc genhtml_function_coverage=1 00:17:56.144 --rc genhtml_legend=1 00:17:56.144 --rc geninfo_all_blocks=1 00:17:56.144 --rc geninfo_unexecuted_blocks=1 00:17:56.144 00:17:56.144 ' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:56.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.144 --rc genhtml_branch_coverage=1 00:17:56.144 --rc genhtml_function_coverage=1 00:17:56.144 --rc genhtml_legend=1 00:17:56.144 --rc geninfo_all_blocks=1 00:17:56.144 --rc geninfo_unexecuted_blocks=1 00:17:56.144 00:17:56.144 ' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:56.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:56.144 --rc genhtml_branch_coverage=1 00:17:56.144 --rc genhtml_function_coverage=1 00:17:56.144 --rc genhtml_legend=1 00:17:56.144 --rc geninfo_all_blocks=1 00:17:56.144 --rc geninfo_unexecuted_blocks=1 00:17:56.144 00:17:56.144 ' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:56.144 
21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=86198 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 86198 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 86198 ']' 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
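fio.sh has now declared its suites (the 'basic' suite selected here maps to randw-verify, randw-verify-j2 and randw-verify-depth128), exported FTL_BDEV_NAME=ftl0 and FTL_JSON_CONF for the fio run, installed a cleanup trap, and started its own spdk_tgt (pid 86198); waitforlisten simply polls the RPC socket until the target answers. A minimal sketch of that start-and-wait pattern, using the same defaults as this run:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock

  "$spdk_tgt" -m 7 &           # 0x7: reactors on cores 0-2, as in the log
  svcpid=$!
  trap 'kill "$svcpid"' EXIT   # stand-in for fio.sh's fio_kill trap

  # waitforlisten, in essence: poll until the target answers on its
  # UNIX-domain RPC socket, then proceed with configuration.
  for i in $(seq 1 100); do
    "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done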
00:17:56.144 21:00:14 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:56.144 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:56.144 [2024-11-20 21:00:14.167653] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:17:56.144 [2024-11-20 21:00:14.168221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86198 ] 00:17:56.405 [2024-11-20 21:00:14.314738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:56.405 [2024-11-20 21:00:14.346252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.405 [2024-11-20 21:00:14.346618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.405 [2024-11-20 21:00:14.346880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.977 21:00:14 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.977 21:00:14 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:56.977 21:00:14 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:56.977 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:56.977 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:56.977 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:56.977 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:56.977 21:00:14 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:57.241 21:00:15 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:57.241 21:00:15 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:57.241 21:00:15 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:57.241 21:00:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:57.241 21:00:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:57.241 21:00:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:57.241 21:00:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:57.241 21:00:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:57.502 21:00:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:57.502 { 00:17:57.502 "name": "nvme0n1", 00:17:57.502 "aliases": [ 00:17:57.502 "69b233d7-b90f-4617-9fa9-04d368a010c1" 00:17:57.502 ], 00:17:57.502 "product_name": "NVMe disk", 00:17:57.502 "block_size": 4096, 00:17:57.502 "num_blocks": 1310720, 00:17:57.502 "uuid": "69b233d7-b90f-4617-9fa9-04d368a010c1", 00:17:57.502 "numa_id": -1, 00:17:57.502 "assigned_rate_limits": { 00:17:57.502 "rw_ios_per_sec": 0, 00:17:57.502 "rw_mbytes_per_sec": 0, 00:17:57.502 "r_mbytes_per_sec": 0, 00:17:57.502 "w_mbytes_per_sec": 0 00:17:57.502 }, 00:17:57.502 "claimed": false, 00:17:57.502 "zoned": false, 00:17:57.502 "supported_io_types": { 00:17:57.502 "read": true, 00:17:57.502 "write": true, 00:17:57.502 "unmap": true, 00:17:57.502 "flush": true, 
00:17:57.502 "reset": true, 00:17:57.502 "nvme_admin": true, 00:17:57.502 "nvme_io": true, 00:17:57.502 "nvme_io_md": false, 00:17:57.502 "write_zeroes": true, 00:17:57.502 "zcopy": false, 00:17:57.502 "get_zone_info": false, 00:17:57.502 "zone_management": false, 00:17:57.502 "zone_append": false, 00:17:57.502 "compare": true, 00:17:57.502 "compare_and_write": false, 00:17:57.502 "abort": true, 00:17:57.502 "seek_hole": false, 00:17:57.502 "seek_data": false, 00:17:57.502 "copy": true, 00:17:57.502 "nvme_iov_md": false 00:17:57.502 }, 00:17:57.502 "driver_specific": { 00:17:57.502 "nvme": [ 00:17:57.502 { 00:17:57.502 "pci_address": "0000:00:11.0", 00:17:57.502 "trid": { 00:17:57.502 "trtype": "PCIe", 00:17:57.502 "traddr": "0000:00:11.0" 00:17:57.502 }, 00:17:57.502 "ctrlr_data": { 00:17:57.502 "cntlid": 0, 00:17:57.502 "vendor_id": "0x1b36", 00:17:57.502 "model_number": "QEMU NVMe Ctrl", 00:17:57.502 "serial_number": "12341", 00:17:57.502 "firmware_revision": "8.0.0", 00:17:57.502 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:57.502 "oacs": { 00:17:57.502 "security": 0, 00:17:57.502 "format": 1, 00:17:57.502 "firmware": 0, 00:17:57.502 "ns_manage": 1 00:17:57.502 }, 00:17:57.502 "multi_ctrlr": false, 00:17:57.502 "ana_reporting": false 00:17:57.502 }, 00:17:57.502 "vs": { 00:17:57.502 "nvme_version": "1.4" 00:17:57.502 }, 00:17:57.502 "ns_data": { 00:17:57.502 "id": 1, 00:17:57.502 "can_share": false 00:17:57.502 } 00:17:57.502 } 00:17:57.502 ], 00:17:57.502 "mp_policy": "active_passive" 00:17:57.502 } 00:17:57.502 } 00:17:57.502 ]' 00:17:57.502 21:00:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:57.502 21:00:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:57.502 21:00:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:57.502 21:00:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:57.502 21:00:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:57.502 21:00:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:57.502 21:00:15 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:57.502 21:00:15 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:57.502 21:00:15 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:57.502 21:00:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:57.502 21:00:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:57.764 21:00:15 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:57.764 21:00:15 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:58.025 21:00:15 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=ad8d0361-6cfd-495d-a29a-6e7c2d24e0f4 00:17:58.025 21:00:15 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ad8d0361-6cfd-495d-a29a-6e7c2d24e0f4 00:17:58.025 21:00:16 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=62df91ae-7db9-4d07-b475-4ba3e53a6529 00:17:58.025 21:00:16 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 62df91ae-7db9-4d07-b475-4ba3e53a6529 00:17:58.025 21:00:16 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:58.025 21:00:16 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:58.025 21:00:16 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=62df91ae-7db9-4d07-b475-4ba3e53a6529 00:17:58.025 21:00:16 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:58.025 21:00:16 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 62df91ae-7db9-4d07-b475-4ba3e53a6529 00:17:58.025 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=62df91ae-7db9-4d07-b475-4ba3e53a6529 00:17:58.025 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:58.025 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:58.025 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:58.025 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 62df91ae-7db9-4d07-b475-4ba3e53a6529 00:17:58.287 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:58.287 { 00:17:58.287 "name": "62df91ae-7db9-4d07-b475-4ba3e53a6529", 00:17:58.287 "aliases": [ 00:17:58.287 "lvs/nvme0n1p0" 00:17:58.287 ], 00:17:58.287 "product_name": "Logical Volume", 00:17:58.287 "block_size": 4096, 00:17:58.287 "num_blocks": 26476544, 00:17:58.287 "uuid": "62df91ae-7db9-4d07-b475-4ba3e53a6529", 00:17:58.287 "assigned_rate_limits": { 00:17:58.287 "rw_ios_per_sec": 0, 00:17:58.287 "rw_mbytes_per_sec": 0, 00:17:58.287 "r_mbytes_per_sec": 0, 00:17:58.287 "w_mbytes_per_sec": 0 00:17:58.287 }, 00:17:58.287 "claimed": false, 00:17:58.287 "zoned": false, 00:17:58.287 "supported_io_types": { 00:17:58.287 "read": true, 00:17:58.287 "write": true, 00:17:58.287 "unmap": true, 00:17:58.287 "flush": false, 00:17:58.287 "reset": true, 00:17:58.287 "nvme_admin": false, 00:17:58.287 "nvme_io": false, 00:17:58.287 "nvme_io_md": false, 00:17:58.287 "write_zeroes": true, 00:17:58.287 "zcopy": false, 00:17:58.287 "get_zone_info": false, 00:17:58.287 "zone_management": false, 00:17:58.287 "zone_append": false, 00:17:58.287 "compare": false, 00:17:58.287 "compare_and_write": false, 00:17:58.287 "abort": false, 00:17:58.287 "seek_hole": true, 00:17:58.287 "seek_data": true, 00:17:58.287 "copy": false, 00:17:58.287 "nvme_iov_md": false 00:17:58.287 }, 00:17:58.287 "driver_specific": { 00:17:58.287 "lvol": { 00:17:58.287 "lvol_store_uuid": "ad8d0361-6cfd-495d-a29a-6e7c2d24e0f4", 00:17:58.287 "base_bdev": "nvme0n1", 00:17:58.287 "thin_provision": true, 00:17:58.287 "num_allocated_clusters": 0, 00:17:58.287 "snapshot": false, 00:17:58.287 "clone": false, 00:17:58.287 "esnap_clone": false 00:17:58.287 } 00:17:58.287 } 00:17:58.287 } 00:17:58.287 ]' 00:17:58.287 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:58.287 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:58.287 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:58.287 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:58.287 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:58.287 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:58.287 21:00:16 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:58.287 21:00:16 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:58.287 21:00:16 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
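The repeated get_bdev_size calls in this stretch are plain block arithmetic: bdev_get_bdevs reports block_size and num_blocks, and the helper converts the product to MiB, giving 5120 MiB for nvme0n1 (1310720 x 4096) and 103424 MiB for the thin-provisioned lvol (26476544 x 4096). A sketch of the same computation, assuming the same rpc.py path and jq as above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdev_size_mb() {
    local info bs nb
    info=$("$rpc" bdev_get_bdevs -b "$1")
    bs=$(jq '.[] .block_size' <<<"$info")   # 4096 for every bdev in this run
    nb=$(jq '.[] .num_blocks' <<<"$info")   # 1310720 / 26476544 above
    echo $(( bs * nb / 1024 / 1024 ))       # -> 5120 and 103424 MiB
  }
  bdev_size_mb nvme0n1

common.sh settles on a 5171 MiB cache (base_size/cache_size in the trace) and carves nvc0n1p0 out of the cache controller with bdev_split_create just below. Note also the '[: -eq: unary operator expected' that fio.sh line 52 prints a few lines further on: the left-hand operand of its numeric test evidently expands to empty, so [ exits nonzero, bash takes the false branch, and the script falls through to the l2p_dram_size_mb=60 default at line 56; noisy, but harmless to this run.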
00:17:58.549 21:00:16 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:58.549 21:00:16 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:58.549 21:00:16 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 62df91ae-7db9-4d07-b475-4ba3e53a6529 00:17:58.549 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=62df91ae-7db9-4d07-b475-4ba3e53a6529 00:17:58.549 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:58.549 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:58.549 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:58.549 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 62df91ae-7db9-4d07-b475-4ba3e53a6529 00:17:58.810 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:58.810 { 00:17:58.810 "name": "62df91ae-7db9-4d07-b475-4ba3e53a6529", 00:17:58.810 "aliases": [ 00:17:58.810 "lvs/nvme0n1p0" 00:17:58.810 ], 00:17:58.810 "product_name": "Logical Volume", 00:17:58.810 "block_size": 4096, 00:17:58.810 "num_blocks": 26476544, 00:17:58.810 "uuid": "62df91ae-7db9-4d07-b475-4ba3e53a6529", 00:17:58.810 "assigned_rate_limits": { 00:17:58.810 "rw_ios_per_sec": 0, 00:17:58.810 "rw_mbytes_per_sec": 0, 00:17:58.810 "r_mbytes_per_sec": 0, 00:17:58.810 "w_mbytes_per_sec": 0 00:17:58.810 }, 00:17:58.810 "claimed": false, 00:17:58.810 "zoned": false, 00:17:58.810 "supported_io_types": { 00:17:58.810 "read": true, 00:17:58.810 "write": true, 00:17:58.810 "unmap": true, 00:17:58.810 "flush": false, 00:17:58.810 "reset": true, 00:17:58.810 "nvme_admin": false, 00:17:58.810 "nvme_io": false, 00:17:58.810 "nvme_io_md": false, 00:17:58.810 "write_zeroes": true, 00:17:58.810 "zcopy": false, 00:17:58.810 "get_zone_info": false, 00:17:58.810 "zone_management": false, 00:17:58.810 "zone_append": false, 00:17:58.810 "compare": false, 00:17:58.810 "compare_and_write": false, 00:17:58.810 "abort": false, 00:17:58.810 "seek_hole": true, 00:17:58.810 "seek_data": true, 00:17:58.810 "copy": false, 00:17:58.810 "nvme_iov_md": false 00:17:58.810 }, 00:17:58.810 "driver_specific": { 00:17:58.810 "lvol": { 00:17:58.810 "lvol_store_uuid": "ad8d0361-6cfd-495d-a29a-6e7c2d24e0f4", 00:17:58.810 "base_bdev": "nvme0n1", 00:17:58.810 "thin_provision": true, 00:17:58.810 "num_allocated_clusters": 0, 00:17:58.810 "snapshot": false, 00:17:58.810 "clone": false, 00:17:58.810 "esnap_clone": false 00:17:58.810 } 00:17:58.810 } 00:17:58.810 } 00:17:58.810 ]' 00:17:58.810 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:58.810 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:58.810 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:58.810 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:58.810 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:58.810 21:00:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:58.810 21:00:16 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:58.810 21:00:16 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:59.069 21:00:17 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:59.069 21:00:17 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:17:59.069 21:00:17 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:59.069 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:59.069 21:00:17 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 62df91ae-7db9-4d07-b475-4ba3e53a6529 00:17:59.069 21:00:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=62df91ae-7db9-4d07-b475-4ba3e53a6529 00:17:59.069 21:00:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:59.069 21:00:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:59.069 21:00:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:59.069 21:00:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 62df91ae-7db9-4d07-b475-4ba3e53a6529 00:17:59.327 21:00:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:59.327 { 00:17:59.327 "name": "62df91ae-7db9-4d07-b475-4ba3e53a6529", 00:17:59.327 "aliases": [ 00:17:59.327 "lvs/nvme0n1p0" 00:17:59.327 ], 00:17:59.327 "product_name": "Logical Volume", 00:17:59.327 "block_size": 4096, 00:17:59.327 "num_blocks": 26476544, 00:17:59.327 "uuid": "62df91ae-7db9-4d07-b475-4ba3e53a6529", 00:17:59.327 "assigned_rate_limits": { 00:17:59.327 "rw_ios_per_sec": 0, 00:17:59.327 "rw_mbytes_per_sec": 0, 00:17:59.327 "r_mbytes_per_sec": 0, 00:17:59.327 "w_mbytes_per_sec": 0 00:17:59.327 }, 00:17:59.327 "claimed": false, 00:17:59.327 "zoned": false, 00:17:59.327 "supported_io_types": { 00:17:59.327 "read": true, 00:17:59.327 "write": true, 00:17:59.327 "unmap": true, 00:17:59.327 "flush": false, 00:17:59.327 "reset": true, 00:17:59.327 "nvme_admin": false, 00:17:59.327 "nvme_io": false, 00:17:59.327 "nvme_io_md": false, 00:17:59.327 "write_zeroes": true, 00:17:59.327 "zcopy": false, 00:17:59.327 "get_zone_info": false, 00:17:59.327 "zone_management": false, 00:17:59.327 "zone_append": false, 00:17:59.327 "compare": false, 00:17:59.327 "compare_and_write": false, 00:17:59.327 "abort": false, 00:17:59.327 "seek_hole": true, 00:17:59.327 "seek_data": true, 00:17:59.327 "copy": false, 00:17:59.327 "nvme_iov_md": false 00:17:59.327 }, 00:17:59.327 "driver_specific": { 00:17:59.327 "lvol": { 00:17:59.327 "lvol_store_uuid": "ad8d0361-6cfd-495d-a29a-6e7c2d24e0f4", 00:17:59.327 "base_bdev": "nvme0n1", 00:17:59.327 "thin_provision": true, 00:17:59.327 "num_allocated_clusters": 0, 00:17:59.327 "snapshot": false, 00:17:59.327 "clone": false, 00:17:59.327 "esnap_clone": false 00:17:59.327 } 00:17:59.327 } 00:17:59.327 } 00:17:59.327 ]' 00:17:59.327 21:00:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:59.327 21:00:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:59.327 21:00:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:59.327 21:00:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:59.327 21:00:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:59.327 21:00:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:59.327 21:00:17 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:59.327 21:00:17 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:59.327 21:00:17 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
62df91ae-7db9-4d07-b475-4ba3e53a6529 -c nvc0n1p0 --l2p_dram_limit 60 00:17:59.586 [2024-11-20 21:00:17.543473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.586 [2024-11-20 21:00:17.543860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:59.586 [2024-11-20 21:00:17.543924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:59.586 [2024-11-20 21:00:17.543960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.586 [2024-11-20 21:00:17.544060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.586 [2024-11-20 21:00:17.544101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:59.586 [2024-11-20 21:00:17.544138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:59.586 [2024-11-20 21:00:17.544176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.586 [2024-11-20 21:00:17.544222] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:59.586 [2024-11-20 21:00:17.544445] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:59.586 [2024-11-20 21:00:17.544503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.586 [2024-11-20 21:00:17.544548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:59.586 [2024-11-20 21:00:17.544579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:17:59.586 [2024-11-20 21:00:17.544613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.586 [2024-11-20 21:00:17.544781] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 32b63e44-37a8-4148-91b0-e0c18ba4cbdc 00:17:59.586 [2024-11-20 21:00:17.545859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.586 [2024-11-20 21:00:17.545931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:59.586 [2024-11-20 21:00:17.545969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:59.586 [2024-11-20 21:00:17.546001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.586 [2024-11-20 21:00:17.551097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.586 [2024-11-20 21:00:17.551168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:59.586 [2024-11-20 21:00:17.551180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.994 ms 00:17:59.586 [2024-11-20 21:00:17.551195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.586 [2024-11-20 21:00:17.551278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.586 [2024-11-20 21:00:17.551289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:59.586 [2024-11-20 21:00:17.551298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:59.586 [2024-11-20 21:00:17.551312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.586 [2024-11-20 21:00:17.551359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.586 [2024-11-20 21:00:17.551374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:59.586 [2024-11-20 21:00:17.551382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.008 ms 00:17:59.586 [2024-11-20 21:00:17.551388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.586 [2024-11-20 21:00:17.551416] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:59.586 [2024-11-20 21:00:17.552698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.586 [2024-11-20 21:00:17.552720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:59.586 [2024-11-20 21:00:17.552728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.288 ms 00:17:59.586 [2024-11-20 21:00:17.552735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.586 [2024-11-20 21:00:17.552791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.586 [2024-11-20 21:00:17.552799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:59.586 [2024-11-20 21:00:17.552806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:59.586 [2024-11-20 21:00:17.552814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.586 [2024-11-20 21:00:17.552844] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:59.586 [2024-11-20 21:00:17.552962] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:59.587 [2024-11-20 21:00:17.552976] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:59.587 [2024-11-20 21:00:17.552986] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:59.587 [2024-11-20 21:00:17.553001] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:59.587 [2024-11-20 21:00:17.553013] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:59.587 [2024-11-20 21:00:17.553019] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:59.587 [2024-11-20 21:00:17.553026] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:59.587 [2024-11-20 21:00:17.553031] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:59.587 [2024-11-20 21:00:17.553037] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:59.587 [2024-11-20 21:00:17.553043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.587 [2024-11-20 21:00:17.553050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:59.587 [2024-11-20 21:00:17.553057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:17:59.587 [2024-11-20 21:00:17.553064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.587 [2024-11-20 21:00:17.553142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.587 [2024-11-20 21:00:17.553155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:59.587 [2024-11-20 21:00:17.553163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:59.587 [2024-11-20 21:00:17.553170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.587 [2024-11-20 21:00:17.553261] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:17:59.587 [2024-11-20 21:00:17.553274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:59.587 [2024-11-20 21:00:17.553281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:59.587 [2024-11-20 21:00:17.553296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.587 [2024-11-20 21:00:17.553302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:59.587 [2024-11-20 21:00:17.553308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:59.587 [2024-11-20 21:00:17.553313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:59.587 [2024-11-20 21:00:17.553320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:59.587 [2024-11-20 21:00:17.553325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:59.587 [2024-11-20 21:00:17.553333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:59.587 [2024-11-20 21:00:17.553338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:59.587 [2024-11-20 21:00:17.553345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:59.587 [2024-11-20 21:00:17.553350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:59.587 [2024-11-20 21:00:17.553359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:59.587 [2024-11-20 21:00:17.553365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:59.587 [2024-11-20 21:00:17.553372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.587 [2024-11-20 21:00:17.553382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:59.587 [2024-11-20 21:00:17.553389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:59.587 [2024-11-20 21:00:17.553395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.587 [2024-11-20 21:00:17.553402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:59.587 [2024-11-20 21:00:17.553408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:59.587 [2024-11-20 21:00:17.553415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:59.587 [2024-11-20 21:00:17.553421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:59.587 [2024-11-20 21:00:17.553428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:59.587 [2024-11-20 21:00:17.553434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:59.587 [2024-11-20 21:00:17.553441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:59.587 [2024-11-20 21:00:17.553446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:59.587 [2024-11-20 21:00:17.553453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:59.587 [2024-11-20 21:00:17.553459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:59.587 [2024-11-20 21:00:17.553467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:59.587 [2024-11-20 21:00:17.553473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:59.587 [2024-11-20 21:00:17.553481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:59.587 [2024-11-20 21:00:17.553487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:17:59.587 [2024-11-20 21:00:17.553498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:59.587 [2024-11-20 21:00:17.553503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:59.587 [2024-11-20 21:00:17.553510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:59.587 [2024-11-20 21:00:17.553516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:59.587 [2024-11-20 21:00:17.553523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:59.587 [2024-11-20 21:00:17.553528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:59.587 [2024-11-20 21:00:17.553536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.587 [2024-11-20 21:00:17.553542] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:59.587 [2024-11-20 21:00:17.553549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:59.587 [2024-11-20 21:00:17.553554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.587 [2024-11-20 21:00:17.553561] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:59.587 [2024-11-20 21:00:17.553575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:59.587 [2024-11-20 21:00:17.553584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:59.587 [2024-11-20 21:00:17.553592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:59.587 [2024-11-20 21:00:17.553600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:59.587 [2024-11-20 21:00:17.553608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:59.587 [2024-11-20 21:00:17.553616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:59.587 [2024-11-20 21:00:17.553622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:59.587 [2024-11-20 21:00:17.553629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:59.587 [2024-11-20 21:00:17.553635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:59.587 [2024-11-20 21:00:17.553645] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:59.587 [2024-11-20 21:00:17.553661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:59.587 [2024-11-20 21:00:17.553678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:59.587 [2024-11-20 21:00:17.553684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:59.587 [2024-11-20 21:00:17.553692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:59.587 [2024-11-20 21:00:17.553698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:59.587 [2024-11-20 21:00:17.553706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:59.587 [2024-11-20 21:00:17.553712] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:59.587 [2024-11-20 21:00:17.553720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:59.587 [2024-11-20 21:00:17.553726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:59.587 [2024-11-20 21:00:17.553734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:59.587 [2024-11-20 21:00:17.553739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:59.587 [2024-11-20 21:00:17.553755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:59.587 [2024-11-20 21:00:17.553761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:59.587 [2024-11-20 21:00:17.553767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:59.587 [2024-11-20 21:00:17.553773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:59.587 [2024-11-20 21:00:17.553779] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:59.587 [2024-11-20 21:00:17.553785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:59.588 [2024-11-20 21:00:17.553792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:59.588 [2024-11-20 21:00:17.553798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:59.588 [2024-11-20 21:00:17.553805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:59.588 [2024-11-20 21:00:17.553810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:59.588 [2024-11-20 21:00:17.553817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.588 [2024-11-20 21:00:17.553822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:59.588 [2024-11-20 21:00:17.553841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:17:59.588 [2024-11-20 21:00:17.553846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.588 [2024-11-20 21:00:17.553906] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
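Everything from the bdev_ftl_create RPC to this point is the FTL startup sequence for ftl0: open the base and cache bdevs, write a fresh superblock, and lay out the metadata regions dumped above; startup then pauses to scrub the 5 NV cache chunks (about 2.3 s in the trace that follows) before initializing metadata. The layout numbers are easy to cross-check: ftl0 exposes 20971520 logical blocks and keeps one 4-byte L2P entry per block, which is exactly the 80.00 MiB l2p region in the dump. Since that exceeds the 60 MiB --l2p_dram_limit passed on the RPC, only part of the table can stay resident, hence the '59 (of 60) MiB' notice further down. The arithmetic, as a sketch:

  l2p_entries=20971520   # "L2P entries" in ftl_layout_setup above
  entry_bytes=4          # "L2P address size: 4"
  echo $(( l2p_entries * entry_bytes / 1024 / 1024 )) MiB   # -> 80 MiB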
00:17:59.588 [2024-11-20 21:00:17.553921] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:02.163 [2024-11-20 21:00:19.852544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.163 [2024-11-20 21:00:19.852600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:02.163 [2024-11-20 21:00:19.852612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2298.625 ms 00:18:02.163 [2024-11-20 21:00:19.852619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.163 [2024-11-20 21:00:19.860588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.163 [2024-11-20 21:00:19.860622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:02.163 [2024-11-20 21:00:19.860633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.880 ms 00:18:02.163 [2024-11-20 21:00:19.860641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.163 [2024-11-20 21:00:19.860720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.163 [2024-11-20 21:00:19.860736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:02.163 [2024-11-20 21:00:19.860754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:02.163 [2024-11-20 21:00:19.860761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.163 [2024-11-20 21:00:19.878339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.163 [2024-11-20 21:00:19.878383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:02.163 [2024-11-20 21:00:19.878396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.521 ms 00:18:02.163 [2024-11-20 21:00:19.878402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.163 [2024-11-20 21:00:19.878436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.163 [2024-11-20 21:00:19.878443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:02.164 [2024-11-20 21:00:19.878451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:02.164 [2024-11-20 21:00:19.878457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.878812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.878831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:02.164 [2024-11-20 21:00:19.878839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:18:02.164 [2024-11-20 21:00:19.878857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.878963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.878975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:02.164 [2024-11-20 21:00:19.878984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:02.164 [2024-11-20 21:00:19.878990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.884397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.884439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:02.164 [2024-11-20 
21:00:19.884454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.362 ms 00:18:02.164 [2024-11-20 21:00:19.884478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.895641] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:02.164 [2024-11-20 21:00:19.908145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.908175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:02.164 [2024-11-20 21:00:19.908183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.563 ms 00:18:02.164 [2024-11-20 21:00:19.908191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.940868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.940904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:02.164 [2024-11-20 21:00:19.940912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.642 ms 00:18:02.164 [2024-11-20 21:00:19.940921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.941067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.941079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:02.164 [2024-11-20 21:00:19.941086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:18:02.164 [2024-11-20 21:00:19.941092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.943420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.943453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:02.164 [2024-11-20 21:00:19.943461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.289 ms 00:18:02.164 [2024-11-20 21:00:19.943476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.945365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.945393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:02.164 [2024-11-20 21:00:19.945401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.850 ms 00:18:02.164 [2024-11-20 21:00:19.945407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.945649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.945664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:02.164 [2024-11-20 21:00:19.945670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:18:02.164 [2024-11-20 21:00:19.945679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.963031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.963065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:02.164 [2024-11-20 21:00:19.963074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.321 ms 00:18:02.164 [2024-11-20 21:00:19.963082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.966169] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.966205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:02.164 [2024-11-20 21:00:19.966213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.030 ms 00:18:02.164 [2024-11-20 21:00:19.966227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.968440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.968469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:02.164 [2024-11-20 21:00:19.968476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.180 ms 00:18:02.164 [2024-11-20 21:00:19.968482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.970921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.970953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:02.164 [2024-11-20 21:00:19.970960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.409 ms 00:18:02.164 [2024-11-20 21:00:19.970968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.971010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.971018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:02.164 [2024-11-20 21:00:19.971025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:02.164 [2024-11-20 21:00:19.971032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.971091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.164 [2024-11-20 21:00:19.971099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:02.164 [2024-11-20 21:00:19.971107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:02.164 [2024-11-20 21:00:19.971115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.164 [2024-11-20 21:00:19.971901] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2428.072 ms, result 0 00:18:02.164 { 00:18:02.164 "name": "ftl0", 00:18:02.164 "uuid": "32b63e44-37a8-4148-91b0-e0c18ba4cbdc" 00:18:02.164 } 00:18:02.164 21:00:19 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:02.164 21:00:19 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:02.164 21:00:19 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:02.164 21:00:19 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:02.164 21:00:19 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:02.164 21:00:19 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:02.164 21:00:19 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:02.164 21:00:20 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:02.442 [ 00:18:02.442 { 00:18:02.442 "name": "ftl0", 00:18:02.442 "aliases": [ 00:18:02.442 "32b63e44-37a8-4148-91b0-e0c18ba4cbdc" 00:18:02.442 ], 00:18:02.442 "product_name": "FTL disk", 00:18:02.442 
"block_size": 4096, 00:18:02.442 "num_blocks": 20971520, 00:18:02.442 "uuid": "32b63e44-37a8-4148-91b0-e0c18ba4cbdc", 00:18:02.442 "assigned_rate_limits": { 00:18:02.442 "rw_ios_per_sec": 0, 00:18:02.443 "rw_mbytes_per_sec": 0, 00:18:02.443 "r_mbytes_per_sec": 0, 00:18:02.443 "w_mbytes_per_sec": 0 00:18:02.443 }, 00:18:02.443 "claimed": false, 00:18:02.443 "zoned": false, 00:18:02.443 "supported_io_types": { 00:18:02.443 "read": true, 00:18:02.443 "write": true, 00:18:02.443 "unmap": true, 00:18:02.443 "flush": true, 00:18:02.443 "reset": false, 00:18:02.443 "nvme_admin": false, 00:18:02.443 "nvme_io": false, 00:18:02.443 "nvme_io_md": false, 00:18:02.443 "write_zeroes": true, 00:18:02.443 "zcopy": false, 00:18:02.443 "get_zone_info": false, 00:18:02.443 "zone_management": false, 00:18:02.443 "zone_append": false, 00:18:02.443 "compare": false, 00:18:02.443 "compare_and_write": false, 00:18:02.443 "abort": false, 00:18:02.443 "seek_hole": false, 00:18:02.443 "seek_data": false, 00:18:02.443 "copy": false, 00:18:02.443 "nvme_iov_md": false 00:18:02.443 }, 00:18:02.443 "driver_specific": { 00:18:02.443 "ftl": { 00:18:02.443 "base_bdev": "62df91ae-7db9-4d07-b475-4ba3e53a6529", 00:18:02.443 "cache": "nvc0n1p0" 00:18:02.443 } 00:18:02.443 } 00:18:02.443 } 00:18:02.443 ] 00:18:02.443 21:00:20 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:02.443 21:00:20 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:02.443 21:00:20 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:02.702 21:00:20 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:02.702 21:00:20 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:02.702 [2024-11-20 21:00:20.762810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.702 [2024-11-20 21:00:20.762849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:02.702 [2024-11-20 21:00:20.762860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:02.703 [2024-11-20 21:00:20.762867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.703 [2024-11-20 21:00:20.762900] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:02.703 [2024-11-20 21:00:20.763321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.703 [2024-11-20 21:00:20.763344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:02.703 [2024-11-20 21:00:20.763353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:18:02.703 [2024-11-20 21:00:20.763360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.703 [2024-11-20 21:00:20.763804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.703 [2024-11-20 21:00:20.763861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:02.703 [2024-11-20 21:00:20.763872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:18:02.703 [2024-11-20 21:00:20.763880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.703 [2024-11-20 21:00:20.766282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.703 [2024-11-20 21:00:20.766300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:02.703 [2024-11-20 
21:00:20.766308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.381 ms 00:18:02.703 [2024-11-20 21:00:20.766315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.703 [2024-11-20 21:00:20.770988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.703 [2024-11-20 21:00:20.771015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:02.703 [2024-11-20 21:00:20.771023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.630 ms 00:18:02.703 [2024-11-20 21:00:20.771031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.703 [2024-11-20 21:00:20.772479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.703 [2024-11-20 21:00:20.772511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:02.703 [2024-11-20 21:00:20.772517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:18:02.703 [2024-11-20 21:00:20.772525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.703 [2024-11-20 21:00:20.776174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.703 [2024-11-20 21:00:20.776206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:02.703 [2024-11-20 21:00:20.776216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.618 ms 00:18:02.703 [2024-11-20 21:00:20.776223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.703 [2024-11-20 21:00:20.776369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.703 [2024-11-20 21:00:20.776385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:02.703 [2024-11-20 21:00:20.776392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:18:02.703 [2024-11-20 21:00:20.776399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.703 [2024-11-20 21:00:20.777415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.703 [2024-11-20 21:00:20.777444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:02.703 [2024-11-20 21:00:20.777451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:18:02.703 [2024-11-20 21:00:20.777458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.703 [2024-11-20 21:00:20.778291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.703 [2024-11-20 21:00:20.778322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:02.703 [2024-11-20 21:00:20.778328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:18:02.703 [2024-11-20 21:00:20.778335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.703 [2024-11-20 21:00:20.779117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.703 [2024-11-20 21:00:20.779147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:02.703 [2024-11-20 21:00:20.779154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:18:02.703 [2024-11-20 21:00:20.779161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.703 [2024-11-20 21:00:20.779875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.703 [2024-11-20 21:00:20.779902] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:02.703 [2024-11-20 21:00:20.779909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.634 ms 00:18:02.703 [2024-11-20 21:00:20.779916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.703 [2024-11-20 21:00:20.779942] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:02.703 [2024-11-20 21:00:20.779955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.779962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.779970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.779976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.779987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.779993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 
21:00:20.780114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:18:02.703 [2024-11-20 21:00:20.780278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:02.703 [2024-11-20 21:00:20.780342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:02.704 [2024-11-20 21:00:20.780626] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:02.704 [2024-11-20 21:00:20.780632] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 32b63e44-37a8-4148-91b0-e0c18ba4cbdc 00:18:02.704 [2024-11-20 21:00:20.780649] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:02.704 [2024-11-20 21:00:20.780662] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:02.704 [2024-11-20 21:00:20.780669] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:02.704 [2024-11-20 21:00:20.780675] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:02.704 [2024-11-20 21:00:20.780682] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:02.704 [2024-11-20 21:00:20.780688] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:02.704 [2024-11-20 21:00:20.780695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:02.704 [2024-11-20 21:00:20.780700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:02.704 [2024-11-20 21:00:20.780706] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:02.704 [2024-11-20 21:00:20.780711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.704 [2024-11-20 21:00:20.780718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:02.704 [2024-11-20 21:00:20.780724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:18:02.704 [2024-11-20 21:00:20.780730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 [2024-11-20 21:00:20.782108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.704 [2024-11-20 21:00:20.782132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:02.704 [2024-11-20 21:00:20.782139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.345 ms 00:18:02.704 [2024-11-20 21:00:20.782147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 [2024-11-20 21:00:20.782227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:02.704 [2024-11-20 21:00:20.782235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:02.704 [2024-11-20 21:00:20.782242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:02.704 [2024-11-20 21:00:20.782248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 [2024-11-20 21:00:20.787012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.704 [2024-11-20 21:00:20.787045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:02.704 [2024-11-20 21:00:20.787052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.704 [2024-11-20 21:00:20.787068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 
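The records above and continuing below are the normal FTL shutdown sequence kicked off by the bdev_ftl_unload RPC: the device persists its L2P, NV cache, valid-map, P2L, band and trim metadata and the superblock, sets the clean state, dumps per-band validity (all 100 bands free, write count 0) and device statistics (960 total writes, 0 user writes, WAF: inf), and the 'Rollback' records that follow release each initialization step in reverse order. A minimal stand-alone sketch of the two RPC steps the harness ran above; the redirect target is an assumption, inferred from the ftl.json cleanup later in this run:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  {
    echo '{"subsystems": ['
    "$RPC" save_subsystem_config -n bdev   # dump the live bdev subsystem config as JSON
    echo ']}'
  } > "$CFG"
  "$RPC" bdev_ftl_unload -b ftl0           # triggers the 'FTL shutdown' management process traced here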
[2024-11-20 21:00:20.787116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.704 [2024-11-20 21:00:20.787123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:02.704 [2024-11-20 21:00:20.787130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.704 [2024-11-20 21:00:20.787137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 [2024-11-20 21:00:20.787190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.704 [2024-11-20 21:00:20.787204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:02.704 [2024-11-20 21:00:20.787211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.704 [2024-11-20 21:00:20.787217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 [2024-11-20 21:00:20.787238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.704 [2024-11-20 21:00:20.787246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:02.704 [2024-11-20 21:00:20.787252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.704 [2024-11-20 21:00:20.787259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 [2024-11-20 21:00:20.795580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.704 [2024-11-20 21:00:20.795619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:02.704 [2024-11-20 21:00:20.795627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.704 [2024-11-20 21:00:20.795634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 [2024-11-20 21:00:20.802571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.704 [2024-11-20 21:00:20.802611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:02.704 [2024-11-20 21:00:20.802620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.704 [2024-11-20 21:00:20.802627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 [2024-11-20 21:00:20.802699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.704 [2024-11-20 21:00:20.802710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:02.704 [2024-11-20 21:00:20.802716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.704 [2024-11-20 21:00:20.802724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 [2024-11-20 21:00:20.802836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.704 [2024-11-20 21:00:20.802850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:02.704 [2024-11-20 21:00:20.802857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.704 [2024-11-20 21:00:20.802864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 [2024-11-20 21:00:20.802935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.704 [2024-11-20 21:00:20.802949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:02.704 [2024-11-20 21:00:20.802963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.704 [2024-11-20 21:00:20.802978] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 [2024-11-20 21:00:20.803022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.704 [2024-11-20 21:00:20.803031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:02.704 [2024-11-20 21:00:20.803037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.704 [2024-11-20 21:00:20.803044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 [2024-11-20 21:00:20.803087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.704 [2024-11-20 21:00:20.803098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:02.704 [2024-11-20 21:00:20.803104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.704 [2024-11-20 21:00:20.803111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 [2024-11-20 21:00:20.803154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:02.704 [2024-11-20 21:00:20.803169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:02.704 [2024-11-20 21:00:20.803175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:02.704 [2024-11-20 21:00:20.803182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:02.704 [2024-11-20 21:00:20.803331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 40.487 ms, result 0 00:18:02.704 true 00:18:02.963 21:00:20 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 86198 00:18:02.963 21:00:20 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 86198 ']' 00:18:02.963 21:00:20 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 86198 00:18:02.963 21:00:20 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:02.963 21:00:20 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:02.963 21:00:20 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86198 00:18:02.963 killing process with pid 86198 00:18:02.963 21:00:20 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:02.963 21:00:20 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:02.963 21:00:20 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86198' 00:18:02.963 21:00:20 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 86198 00:18:02.963 21:00:20 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 86198 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:09.540 21:00:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:09.540 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:09.540 fio-3.35 00:18:09.540 Starting 1 thread 00:18:14.827 00:18:14.827 test: (groupid=0, jobs=1): err= 0: pid=86358: Wed Nov 20 21:00:32 2024 00:18:14.827 read: IOPS=855, BW=56.8MiB/s (59.6MB/s)(255MiB/4480msec) 00:18:14.827 slat (nsec): min=3869, max=33512, avg=6784.61, stdev=3465.58 00:18:14.827 clat (usec): min=261, max=1454, avg=528.49, stdev=221.23 00:18:14.827 lat (usec): min=266, max=1475, avg=535.28, stdev=223.21 00:18:14.827 clat percentiles (usec): 00:18:14.827 | 1.00th=[ 322], 5.00th=[ 326], 10.00th=[ 330], 20.00th=[ 334], 00:18:14.827 | 30.00th=[ 343], 40.00th=[ 400], 50.00th=[ 469], 60.00th=[ 523], 00:18:14.827 | 70.00th=[ 586], 80.00th=[ 734], 90.00th=[ 906], 95.00th=[ 963], 00:18:14.827 | 99.00th=[ 1057], 99.50th=[ 1172], 99.90th=[ 1319], 99.95th=[ 1418], 00:18:14.827 | 99.99th=[ 1450] 00:18:14.827 write: IOPS=861, BW=57.2MiB/s (60.0MB/s)(256MiB/4477msec); 0 zone resets 00:18:14.827 slat (nsec): min=14562, max=66912, avg=21080.33, stdev=5426.44 00:18:14.827 clat (usec): min=307, max=1927, avg=595.25, stdev=255.70 00:18:14.827 lat (usec): min=324, max=1953, avg=616.34, stdev=258.93 00:18:14.827 clat percentiles (usec): 00:18:14.827 | 1.00th=[ 347], 5.00th=[ 355], 10.00th=[ 355], 20.00th=[ 359], 00:18:14.827 | 30.00th=[ 367], 40.00th=[ 453], 50.00th=[ 553], 60.00th=[ 586], 00:18:14.827 | 70.00th=[ 668], 80.00th=[ 881], 90.00th=[ 988], 95.00th=[ 1057], 00:18:14.827 | 99.00th=[ 1303], 99.50th=[ 1385], 99.90th=[ 1680], 99.95th=[ 1827], 00:18:14.827 | 99.99th=[ 1926] 00:18:14.827 bw ( KiB/s): min=38488, max=92752, per=96.66%, avg=56610.00, stdev=19353.92, samples=8 00:18:14.827 iops : min= 566, max= 1364, avg=832.50, stdev=284.62, samples=8 00:18:14.827 lat (usec) : 500=48.33%, 750=30.71%, 
1000=15.68% 00:18:14.827 lat (msec) : 2=5.28% 00:18:14.827 cpu : usr=98.91%, sys=0.20%, ctx=5, majf=0, minf=1326 00:18:14.827 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:14.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:14.827 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:14.827 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:14.827 00:18:14.827 Run status group 0 (all jobs): 00:18:14.827 READ: bw=56.8MiB/s (59.6MB/s), 56.8MiB/s-56.8MiB/s (59.6MB/s-59.6MB/s), io=255MiB (267MB), run=4480-4480msec 00:18:14.827 WRITE: bw=57.2MiB/s (60.0MB/s), 57.2MiB/s-57.2MiB/s (60.0MB/s-60.0MB/s), io=256MiB (269MB), run=4477-4477msec 00:18:15.769 ----------------------------------------------------- 00:18:15.770 Suppressions used: 00:18:15.770 count bytes template 00:18:15.770 1 5 /usr/src/fio/parse.c 00:18:15.770 1 8 libtcmalloc_minimal.so 00:18:15.770 1 904 libcrypto.so 00:18:15.770 ----------------------------------------------------- 00:18:15.770 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:15.770 21:00:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:15.770 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:15.770 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:15.770 fio-3.35 00:18:15.770 Starting 2 threads 00:18:42.341 00:18:42.341 first_half: (groupid=0, jobs=1): err= 0: pid=86461: Wed Nov 20 21:00:56 2024 00:18:42.341 read: IOPS=2939, BW=11.5MiB/s (12.0MB/s)(256MiB/22272msec) 00:18:42.341 slat (nsec): min=2996, max=38212, avg=4457.47, stdev=1060.89 00:18:42.341 clat (usec): min=463, max=286934, avg=36784.23, stdev=24545.22 00:18:42.341 lat (usec): min=467, max=286938, avg=36788.69, stdev=24545.43 00:18:42.341 clat percentiles (msec): 00:18:42.341 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:18:42.341 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:18:42.341 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 42], 95.00th=[ 78], 00:18:42.341 | 99.00th=[ 155], 99.50th=[ 176], 99.90th=[ 255], 99.95th=[ 262], 00:18:42.341 | 99.99th=[ 279] 00:18:42.341 write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(256MiB/22247msec); 0 zone resets 00:18:42.341 slat (usec): min=3, max=652, avg= 5.83, stdev= 3.91 00:18:42.341 clat (usec): min=364, max=61964, avg=6735.91, stdev=7494.83 00:18:42.341 lat (usec): min=373, max=61969, avg=6741.74, stdev=7495.01 00:18:42.341 clat percentiles (usec): 00:18:42.341 | 1.00th=[ 734], 5.00th=[ 881], 10.00th=[ 1172], 20.00th=[ 2311], 00:18:42.341 | 30.00th=[ 3064], 40.00th=[ 3916], 50.00th=[ 4752], 60.00th=[ 5407], 00:18:42.341 | 70.00th=[ 5997], 80.00th=[ 9765], 90.00th=[13173], 95.00th=[23987], 00:18:42.341 | 99.00th=[39060], 99.50th=[48497], 99.90th=[58459], 99.95th=[59507], 00:18:42.341 | 99.99th=[61080] 00:18:42.341 bw ( KiB/s): min= 48, max=52640, per=100.00%, avg=23673.82, stdev=16992.25, samples=22 00:18:42.341 iops : min= 12, max=13160, avg=5918.45, stdev=4248.06, samples=22 00:18:42.341 lat (usec) : 500=0.04%, 750=0.58%, 1000=3.10% 00:18:42.341 lat (msec) : 2=4.64%, 4=12.18%, 10=20.98%, 20=7.06%, 50=47.17% 00:18:42.341 lat (msec) : 100=2.40%, 250=1.78%, 500=0.06% 00:18:42.341 cpu : usr=99.35%, sys=0.08%, ctx=36, majf=0, minf=5597 00:18:42.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:42.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.341 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:42.341 issued rwts: total=65465,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:42.341 second_half: (groupid=0, jobs=1): err= 0: pid=86462: Wed Nov 20 21:00:56 2024 00:18:42.341 read: IOPS=2958, BW=11.6MiB/s (12.1MB/s)(256MiB/22133msec) 00:18:42.341 slat (nsec): min=3054, max=48044, avg=4946.33, stdev=990.68 00:18:42.341 clat (msec): min=10, max=325, avg=37.15, stdev=22.89 00:18:42.341 lat (msec): min=10, max=325, avg=37.15, stdev=22.89 00:18:42.341 clat percentiles (msec): 00:18:42.341 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 30], 00:18:42.341 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:18:42.341 | 70.00th=[ 35], 80.00th=[ 36], 90.00th=[ 44], 95.00th=[ 69], 
00:18:42.341 | 99.00th=[ 146], 99.50th=[ 163], 99.90th=[ 305], 99.95th=[ 317], 00:18:42.341 | 99.99th=[ 326] 00:18:42.341 write: IOPS=2976, BW=11.6MiB/s (12.2MB/s)(256MiB/22016msec); 0 zone resets 00:18:42.341 slat (usec): min=3, max=280, avg= 6.13, stdev= 2.83 00:18:42.341 clat (usec): min=369, max=68119, avg=6087.22, stdev=5335.88 00:18:42.341 lat (usec): min=379, max=68126, avg=6093.35, stdev=5335.87 00:18:42.341 clat percentiles (usec): 00:18:42.341 | 1.00th=[ 766], 5.00th=[ 1598], 10.00th=[ 2409], 20.00th=[ 3097], 00:18:42.341 | 30.00th=[ 3752], 40.00th=[ 4359], 50.00th=[ 4948], 60.00th=[ 5407], 00:18:42.341 | 70.00th=[ 5800], 80.00th=[ 7963], 90.00th=[11863], 95.00th=[13173], 00:18:42.341 | 99.00th=[28967], 99.50th=[36439], 99.90th=[64226], 99.95th=[65274], 00:18:42.341 | 99.99th=[66847] 00:18:42.341 bw ( KiB/s): min= 3288, max=41800, per=100.00%, avg=24949.71, stdev=13540.08, samples=21 00:18:42.341 iops : min= 822, max=10450, avg=6237.43, stdev=3385.02, samples=21 00:18:42.341 lat (usec) : 500=0.02%, 750=0.39%, 1000=0.92% 00:18:42.341 lat (msec) : 2=2.03%, 4=13.45%, 10=25.16%, 20=7.30%, 50=46.31% 00:18:42.341 lat (msec) : 100=2.89%, 250=1.44%, 500=0.09% 00:18:42.341 cpu : usr=99.30%, sys=0.14%, ctx=31, majf=0, minf=5537 00:18:42.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:42.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:42.341 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:42.341 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:42.341 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:42.341 00:18:42.341 Run status group 0 (all jobs): 00:18:42.341 READ: bw=23.0MiB/s (24.1MB/s), 11.5MiB/s-11.6MiB/s (12.0MB/s-12.1MB/s), io=512MiB (536MB), run=22133-22272msec 00:18:42.341 WRITE: bw=23.0MiB/s (24.1MB/s), 11.5MiB/s-11.6MiB/s (12.1MB/s-12.2MB/s), io=512MiB (537MB), run=22016-22247msec 00:18:42.341 ----------------------------------------------------- 00:18:42.341 Suppressions used: 00:18:42.341 count bytes template 00:18:42.341 2 10 /usr/src/fio/parse.c 00:18:42.341 2 192 /usr/src/fio/iolog.c 00:18:42.341 1 8 libtcmalloc_minimal.so 00:18:42.341 1 904 libcrypto.so 00:18:42.341 ----------------------------------------------------- 00:18:42.341 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:42.341 
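Every fio job in this test is launched through the same fio_bdev/fio_plugin wrapper, traced here for the third time: it runs ldd on the spdk_bdev fio plugin, extracts the path of the ASan runtime the plugin links against, and preloads both so the unsanitized system fio binary can host the sanitized plugin. A reduced sketch of that logic, assuming only the GNU libasan case applies (as in this run; the real helper also probes for libclang_rt.asan):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # resolves to /usr/lib64/libasan.so.8 in this run
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio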
21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:42.341 21:00:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:42.341 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:42.341 fio-3.35 00:18:42.341 Starting 1 thread 00:18:57.256 00:18:57.256 test: (groupid=0, jobs=1): err= 0: pid=86747: Wed Nov 20 21:01:13 2024 00:18:57.256 read: IOPS=7831, BW=30.6MiB/s (32.1MB/s)(255MiB/8326msec) 00:18:57.256 slat (nsec): min=3039, max=51123, avg=4141.50, stdev=1776.71 00:18:57.256 clat (usec): min=479, max=33663, avg=16335.83, stdev=2665.78 00:18:57.256 lat (usec): min=483, max=33668, avg=16339.97, stdev=2666.75 00:18:57.256 clat percentiles (usec): 00:18:57.256 | 1.00th=[14353], 5.00th=[14484], 10.00th=[14615], 20.00th=[14746], 00:18:57.256 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15139], 60.00th=[15401], 00:18:57.256 | 70.00th=[15795], 80.00th=[17695], 90.00th=[20579], 95.00th=[22676], 00:18:57.256 | 99.00th=[25297], 99.50th=[26608], 99.90th=[30278], 99.95th=[32113], 00:18:57.256 | 99.99th=[33162] 00:18:57.256 write: IOPS=11.4k, BW=44.6MiB/s (46.8MB/s)(256MiB/5741msec); 0 zone resets 00:18:57.256 slat (usec): min=4, max=575, avg= 7.19, stdev= 4.07 00:18:57.256 clat (usec): min=529, max=47219, avg=11163.01, stdev=11706.35 00:18:57.256 lat (usec): min=534, max=47225, avg=11170.20, stdev=11706.44 00:18:57.256 clat percentiles (usec): 00:18:57.256 | 1.00th=[ 709], 5.00th=[ 898], 10.00th=[ 1029], 20.00th=[ 1188], 00:18:57.256 | 30.00th=[ 1352], 40.00th=[ 1729], 50.00th=[ 8586], 60.00th=[11731], 00:18:57.256 | 70.00th=[14091], 80.00th=[16581], 90.00th=[34341], 95.00th=[36439], 00:18:57.256 | 99.00th=[39060], 99.50th=[40109], 99.90th=[44303], 99.95th=[44827], 00:18:57.256 | 99.99th=[45876] 00:18:57.256 bw ( KiB/s): min=25984, max=54384, per=95.68%, avg=43690.67, stdev=8468.58, samples=12 00:18:57.256 iops : min= 6496, max=13596, avg=10922.67, stdev=2117.14, samples=12 00:18:57.256 lat (usec) : 500=0.01%, 750=0.83%, 1000=3.53% 00:18:57.256 lat (msec) : 2=16.17%, 4=0.58%, 10=5.81%, 20=59.20%, 50=13.88% 00:18:57.256 cpu : usr=99.05%, sys=0.17%, ctx=23, majf=0, minf=5577 00:18:57.256 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:57.256 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.256 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:57.256 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.256 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:57.256 00:18:57.256 Run status group 0 (all jobs): 00:18:57.256 READ: bw=30.6MiB/s (32.1MB/s), 30.6MiB/s-30.6MiB/s (32.1MB/s-32.1MB/s), io=255MiB (267MB), run=8326-8326msec 00:18:57.256 WRITE: bw=44.6MiB/s (46.8MB/s), 44.6MiB/s-44.6MiB/s (46.8MB/s-46.8MB/s), io=256MiB (268MB), run=5741-5741msec 00:18:57.256 ----------------------------------------------------- 00:18:57.256 Suppressions used: 00:18:57.256 count bytes template 00:18:57.256 1 5 /usr/src/fio/parse.c 00:18:57.256 2 192 /usr/src/fio/iolog.c 00:18:57.256 1 8 libtcmalloc_minimal.so 00:18:57.256 1 904 libcrypto.so 00:18:57.256 ----------------------------------------------------- 00:18:57.256 00:18:57.256 21:01:14 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:57.256 21:01:14 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:57.256 21:01:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:57.256 21:01:14 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:57.256 Remove shared memory files 00:18:57.256 21:01:14 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:57.256 21:01:14 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:57.256 21:01:14 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:57.256 21:01:14 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:57.256 21:01:14 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69106 /dev/shm/spdk_tgt_trace.pid85136 00:18:57.256 21:01:14 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:57.256 21:01:14 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:57.256 00:18:57.256 real 1m0.587s 00:18:57.256 user 2m9.981s 00:18:57.256 sys 0m2.807s 00:18:57.256 21:01:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.256 21:01:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:57.256 ************************************ 00:18:57.256 END TEST ftl_fio_basic 00:18:57.256 ************************************ 00:18:57.256 21:01:14 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:57.256 21:01:14 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:57.256 21:01:14 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.256 21:01:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:57.256 ************************************ 00:18:57.256 START TEST ftl_bdevperf 00:18:57.256 ************************************ 00:18:57.256 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:57.256 * Looking for test storage... 
00:18:57.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:57.256 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:57.256 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:57.256 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:57.256 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:57.256 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:57.256 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:57.256 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:57.256 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:57.256 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:57.256 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:57.256 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:57.256 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:57.256 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:57.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.257 --rc genhtml_branch_coverage=1 00:18:57.257 --rc genhtml_function_coverage=1 00:18:57.257 --rc genhtml_legend=1 00:18:57.257 --rc geninfo_all_blocks=1 00:18:57.257 --rc geninfo_unexecuted_blocks=1 00:18:57.257 00:18:57.257 ' 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:57.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.257 --rc genhtml_branch_coverage=1 00:18:57.257 
--rc genhtml_function_coverage=1 00:18:57.257 --rc genhtml_legend=1 00:18:57.257 --rc geninfo_all_blocks=1 00:18:57.257 --rc geninfo_unexecuted_blocks=1 00:18:57.257 00:18:57.257 ' 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:57.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.257 --rc genhtml_branch_coverage=1 00:18:57.257 --rc genhtml_function_coverage=1 00:18:57.257 --rc genhtml_legend=1 00:18:57.257 --rc geninfo_all_blocks=1 00:18:57.257 --rc geninfo_unexecuted_blocks=1 00:18:57.257 00:18:57.257 ' 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:57.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:57.257 --rc genhtml_branch_coverage=1 00:18:57.257 --rc genhtml_function_coverage=1 00:18:57.257 --rc genhtml_legend=1 00:18:57.257 --rc geninfo_all_blocks=1 00:18:57.257 --rc geninfo_unexecuted_blocks=1 00:18:57.257 00:18:57.257 ' 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=86985 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 86985 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 86985 ']' 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.257 21:01:14 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:57.257 [2024-11-20 21:01:14.797384] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:18:57.257 [2024-11-20 21:01:14.797508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86985 ] 00:18:57.257 [2024-11-20 21:01:14.941638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.257 [2024-11-20 21:01:14.961594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.518 21:01:15 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.518 21:01:15 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:57.518 21:01:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:57.518 21:01:15 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:57.518 21:01:15 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:57.518 21:01:15 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:57.519 21:01:15 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:57.519 21:01:15 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:57.780 21:01:15 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:57.780 21:01:15 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:58.041 21:01:15 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:58.041 21:01:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:58.041 21:01:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:58.041 21:01:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:58.041 21:01:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:58.041 21:01:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:58.041 21:01:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:58.041 { 00:18:58.041 "name": "nvme0n1", 00:18:58.041 "aliases": [ 00:18:58.041 "fc69863b-c313-498b-a386-af20c6d2ecfc" 00:18:58.041 ], 00:18:58.041 "product_name": "NVMe disk", 00:18:58.041 "block_size": 4096, 00:18:58.041 "num_blocks": 1310720, 00:18:58.041 "uuid": "fc69863b-c313-498b-a386-af20c6d2ecfc", 00:18:58.041 "numa_id": -1, 00:18:58.041 "assigned_rate_limits": { 00:18:58.041 "rw_ios_per_sec": 0, 00:18:58.041 "rw_mbytes_per_sec": 0, 00:18:58.041 "r_mbytes_per_sec": 0, 00:18:58.041 "w_mbytes_per_sec": 0 00:18:58.041 }, 00:18:58.041 "claimed": true, 00:18:58.041 "claim_type": "read_many_write_one", 00:18:58.041 "zoned": false, 00:18:58.041 "supported_io_types": { 00:18:58.041 "read": true, 00:18:58.041 "write": true, 00:18:58.041 "unmap": true, 00:18:58.041 "flush": true, 00:18:58.041 "reset": true, 00:18:58.041 "nvme_admin": true, 00:18:58.041 "nvme_io": true, 00:18:58.041 "nvme_io_md": false, 00:18:58.041 "write_zeroes": true, 00:18:58.041 "zcopy": false, 00:18:58.041 "get_zone_info": false, 00:18:58.041 "zone_management": false, 00:18:58.041 "zone_append": false, 00:18:58.041 "compare": true, 00:18:58.041 "compare_and_write": false, 00:18:58.041 "abort": true, 00:18:58.041 "seek_hole": false, 00:18:58.041 "seek_data": false, 00:18:58.041 "copy": true, 00:18:58.041 "nvme_iov_md": false 00:18:58.041 }, 00:18:58.041 "driver_specific": { 00:18:58.041 
"nvme": [ 00:18:58.041 { 00:18:58.041 "pci_address": "0000:00:11.0", 00:18:58.042 "trid": { 00:18:58.042 "trtype": "PCIe", 00:18:58.042 "traddr": "0000:00:11.0" 00:18:58.042 }, 00:18:58.042 "ctrlr_data": { 00:18:58.042 "cntlid": 0, 00:18:58.042 "vendor_id": "0x1b36", 00:18:58.042 "model_number": "QEMU NVMe Ctrl", 00:18:58.042 "serial_number": "12341", 00:18:58.042 "firmware_revision": "8.0.0", 00:18:58.042 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:58.042 "oacs": { 00:18:58.042 "security": 0, 00:18:58.042 "format": 1, 00:18:58.042 "firmware": 0, 00:18:58.042 "ns_manage": 1 00:18:58.042 }, 00:18:58.042 "multi_ctrlr": false, 00:18:58.042 "ana_reporting": false 00:18:58.042 }, 00:18:58.042 "vs": { 00:18:58.042 "nvme_version": "1.4" 00:18:58.042 }, 00:18:58.042 "ns_data": { 00:18:58.042 "id": 1, 00:18:58.042 "can_share": false 00:18:58.042 } 00:18:58.042 } 00:18:58.042 ], 00:18:58.042 "mp_policy": "active_passive" 00:18:58.042 } 00:18:58.042 } 00:18:58.042 ]' 00:18:58.042 21:01:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:58.042 21:01:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:58.042 21:01:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:58.304 21:01:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:58.304 21:01:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:58.304 21:01:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:58.304 21:01:16 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:58.304 21:01:16 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:58.304 21:01:16 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:58.304 21:01:16 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:58.304 21:01:16 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:58.304 21:01:16 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=ad8d0361-6cfd-495d-a29a-6e7c2d24e0f4 00:18:58.304 21:01:16 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:58.304 21:01:16 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ad8d0361-6cfd-495d-a29a-6e7c2d24e0f4 00:18:58.564 21:01:16 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:58.825 21:01:16 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=7dc072ce-5524-4594-a559-439ab91c0be6 00:18:58.825 21:01:16 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 7dc072ce-5524-4594-a559-439ab91c0be6 00:18:59.087 21:01:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=8773ee4f-61a9-4c33-89ed-cbeacf7c54c3 00:18:59.087 21:01:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8773ee4f-61a9-4c33-89ed-cbeacf7c54c3 00:18:59.087 21:01:17 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:59.087 21:01:17 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:59.087 21:01:17 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=8773ee4f-61a9-4c33-89ed-cbeacf7c54c3 00:18:59.087 21:01:17 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:59.087 21:01:17 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 8773ee4f-61a9-4c33-89ed-cbeacf7c54c3 00:18:59.087 21:01:17 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=8773ee4f-61a9-4c33-89ed-cbeacf7c54c3 00:18:59.087 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:59.087 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:59.087 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:59.087 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8773ee4f-61a9-4c33-89ed-cbeacf7c54c3 00:18:59.349 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:59.349 { 00:18:59.349 "name": "8773ee4f-61a9-4c33-89ed-cbeacf7c54c3", 00:18:59.349 "aliases": [ 00:18:59.349 "lvs/nvme0n1p0" 00:18:59.349 ], 00:18:59.349 "product_name": "Logical Volume", 00:18:59.349 "block_size": 4096, 00:18:59.349 "num_blocks": 26476544, 00:18:59.349 "uuid": "8773ee4f-61a9-4c33-89ed-cbeacf7c54c3", 00:18:59.349 "assigned_rate_limits": { 00:18:59.349 "rw_ios_per_sec": 0, 00:18:59.349 "rw_mbytes_per_sec": 0, 00:18:59.349 "r_mbytes_per_sec": 0, 00:18:59.349 "w_mbytes_per_sec": 0 00:18:59.349 }, 00:18:59.349 "claimed": false, 00:18:59.349 "zoned": false, 00:18:59.349 "supported_io_types": { 00:18:59.349 "read": true, 00:18:59.349 "write": true, 00:18:59.349 "unmap": true, 00:18:59.349 "flush": false, 00:18:59.349 "reset": true, 00:18:59.349 "nvme_admin": false, 00:18:59.349 "nvme_io": false, 00:18:59.349 "nvme_io_md": false, 00:18:59.349 "write_zeroes": true, 00:18:59.349 "zcopy": false, 00:18:59.349 "get_zone_info": false, 00:18:59.349 "zone_management": false, 00:18:59.349 "zone_append": false, 00:18:59.349 "compare": false, 00:18:59.349 "compare_and_write": false, 00:18:59.349 "abort": false, 00:18:59.349 "seek_hole": true, 00:18:59.349 "seek_data": true, 00:18:59.349 "copy": false, 00:18:59.349 "nvme_iov_md": false 00:18:59.349 }, 00:18:59.349 "driver_specific": { 00:18:59.349 "lvol": { 00:18:59.349 "lvol_store_uuid": "7dc072ce-5524-4594-a559-439ab91c0be6", 00:18:59.349 "base_bdev": "nvme0n1", 00:18:59.349 "thin_provision": true, 00:18:59.349 "num_allocated_clusters": 0, 00:18:59.349 "snapshot": false, 00:18:59.349 "clone": false, 00:18:59.349 "esnap_clone": false 00:18:59.349 } 00:18:59.349 } 00:18:59.349 } 00:18:59.349 ]' 00:18:59.349 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:59.349 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:59.349 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:59.349 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:59.349 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:59.349 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:59.349 21:01:17 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:59.349 21:01:17 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:59.349 21:01:17 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:59.611 21:01:17 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:59.611 21:01:17 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:59.611 21:01:17 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 8773ee4f-61a9-4c33-89ed-cbeacf7c54c3 00:18:59.611 21:01:17 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=8773ee4f-61a9-4c33-89ed-cbeacf7c54c3 00:18:59.611 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:59.611 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:59.611 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:59.611 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8773ee4f-61a9-4c33-89ed-cbeacf7c54c3 00:18:59.872 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:59.872 { 00:18:59.872 "name": "8773ee4f-61a9-4c33-89ed-cbeacf7c54c3", 00:18:59.872 "aliases": [ 00:18:59.872 "lvs/nvme0n1p0" 00:18:59.872 ], 00:18:59.872 "product_name": "Logical Volume", 00:18:59.872 "block_size": 4096, 00:18:59.872 "num_blocks": 26476544, 00:18:59.872 "uuid": "8773ee4f-61a9-4c33-89ed-cbeacf7c54c3", 00:18:59.872 "assigned_rate_limits": { 00:18:59.872 "rw_ios_per_sec": 0, 00:18:59.872 "rw_mbytes_per_sec": 0, 00:18:59.872 "r_mbytes_per_sec": 0, 00:18:59.872 "w_mbytes_per_sec": 0 00:18:59.872 }, 00:18:59.872 "claimed": false, 00:18:59.872 "zoned": false, 00:18:59.872 "supported_io_types": { 00:18:59.872 "read": true, 00:18:59.872 "write": true, 00:18:59.872 "unmap": true, 00:18:59.872 "flush": false, 00:18:59.872 "reset": true, 00:18:59.872 "nvme_admin": false, 00:18:59.872 "nvme_io": false, 00:18:59.872 "nvme_io_md": false, 00:18:59.872 "write_zeroes": true, 00:18:59.872 "zcopy": false, 00:18:59.872 "get_zone_info": false, 00:18:59.872 "zone_management": false, 00:18:59.872 "zone_append": false, 00:18:59.872 "compare": false, 00:18:59.872 "compare_and_write": false, 00:18:59.872 "abort": false, 00:18:59.872 "seek_hole": true, 00:18:59.872 "seek_data": true, 00:18:59.872 "copy": false, 00:18:59.872 "nvme_iov_md": false 00:18:59.872 }, 00:18:59.872 "driver_specific": { 00:18:59.872 "lvol": { 00:18:59.872 "lvol_store_uuid": "7dc072ce-5524-4594-a559-439ab91c0be6", 00:18:59.872 "base_bdev": "nvme0n1", 00:18:59.872 "thin_provision": true, 00:18:59.872 "num_allocated_clusters": 0, 00:18:59.872 "snapshot": false, 00:18:59.872 "clone": false, 00:18:59.872 "esnap_clone": false 00:18:59.872 } 00:18:59.872 } 00:18:59.872 } 00:18:59.872 ]' 00:18:59.872 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:59.872 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:59.872 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:59.872 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:59.872 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:59.872 21:01:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:59.872 21:01:17 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:59.872 21:01:17 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:00.134 21:01:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:00.134 21:01:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 8773ee4f-61a9-4c33-89ed-cbeacf7c54c3 00:19:00.134 21:01:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=8773ee4f-61a9-4c33-89ed-cbeacf7c54c3 00:19:00.134 21:01:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:00.134 21:01:18 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:00.134 21:01:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:00.134 21:01:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8773ee4f-61a9-4c33-89ed-cbeacf7c54c3 00:19:00.395 21:01:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:00.395 { 00:19:00.395 "name": "8773ee4f-61a9-4c33-89ed-cbeacf7c54c3", 00:19:00.395 "aliases": [ 00:19:00.395 "lvs/nvme0n1p0" 00:19:00.395 ], 00:19:00.395 "product_name": "Logical Volume", 00:19:00.395 "block_size": 4096, 00:19:00.395 "num_blocks": 26476544, 00:19:00.395 "uuid": "8773ee4f-61a9-4c33-89ed-cbeacf7c54c3", 00:19:00.395 "assigned_rate_limits": { 00:19:00.395 "rw_ios_per_sec": 0, 00:19:00.395 "rw_mbytes_per_sec": 0, 00:19:00.395 "r_mbytes_per_sec": 0, 00:19:00.395 "w_mbytes_per_sec": 0 00:19:00.395 }, 00:19:00.395 "claimed": false, 00:19:00.395 "zoned": false, 00:19:00.395 "supported_io_types": { 00:19:00.395 "read": true, 00:19:00.395 "write": true, 00:19:00.395 "unmap": true, 00:19:00.395 "flush": false, 00:19:00.395 "reset": true, 00:19:00.395 "nvme_admin": false, 00:19:00.395 "nvme_io": false, 00:19:00.395 "nvme_io_md": false, 00:19:00.395 "write_zeroes": true, 00:19:00.395 "zcopy": false, 00:19:00.395 "get_zone_info": false, 00:19:00.395 "zone_management": false, 00:19:00.395 "zone_append": false, 00:19:00.395 "compare": false, 00:19:00.395 "compare_and_write": false, 00:19:00.395 "abort": false, 00:19:00.395 "seek_hole": true, 00:19:00.395 "seek_data": true, 00:19:00.395 "copy": false, 00:19:00.395 "nvme_iov_md": false 00:19:00.395 }, 00:19:00.395 "driver_specific": { 00:19:00.395 "lvol": { 00:19:00.395 "lvol_store_uuid": "7dc072ce-5524-4594-a559-439ab91c0be6", 00:19:00.395 "base_bdev": "nvme0n1", 00:19:00.395 "thin_provision": true, 00:19:00.395 "num_allocated_clusters": 0, 00:19:00.395 "snapshot": false, 00:19:00.395 "clone": false, 00:19:00.395 "esnap_clone": false 00:19:00.395 } 00:19:00.395 } 00:19:00.395 } 00:19:00.395 ]' 00:19:00.395 21:01:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:00.395 21:01:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:00.395 21:01:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:00.395 21:01:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:00.395 21:01:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:00.395 21:01:18 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:00.395 21:01:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:00.395 21:01:18 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8773ee4f-61a9-4c33-89ed-cbeacf7c54c3 -c nvc0n1p0 --l2p_dram_limit 20 00:19:00.658 [2024-11-20 21:01:18.549023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.658 [2024-11-20 21:01:18.549282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:00.658 [2024-11-20 21:01:18.549317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:00.658 [2024-11-20 21:01:18.549328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.658 [2024-11-20 21:01:18.549422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.658 [2024-11-20 21:01:18.549433] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:00.658 [2024-11-20 21:01:18.549448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:00.658 [2024-11-20 21:01:18.549456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.658 [2024-11-20 21:01:18.549483] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:00.658 [2024-11-20 21:01:18.549904] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:00.658 [2024-11-20 21:01:18.549928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.658 [2024-11-20 21:01:18.549938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:00.658 [2024-11-20 21:01:18.549955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.456 ms 00:19:00.658 [2024-11-20 21:01:18.549964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.658 [2024-11-20 21:01:18.550152] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0a08d9f0-a83b-4682-9ea9-22c97d854f3d 00:19:00.658 [2024-11-20 21:01:18.552268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.658 [2024-11-20 21:01:18.552320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:00.658 [2024-11-20 21:01:18.552332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:00.658 [2024-11-20 21:01:18.552343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.658 [2024-11-20 21:01:18.562618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.658 [2024-11-20 21:01:18.562671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:00.658 [2024-11-20 21:01:18.562683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.208 ms 00:19:00.658 [2024-11-20 21:01:18.562695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.658 [2024-11-20 21:01:18.562799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.658 [2024-11-20 21:01:18.562813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:00.658 [2024-11-20 21:01:18.562822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:00.658 [2024-11-20 21:01:18.562855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.658 [2024-11-20 21:01:18.562942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.658 [2024-11-20 21:01:18.562959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:00.658 [2024-11-20 21:01:18.562968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:00.658 [2024-11-20 21:01:18.562979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.658 [2024-11-20 21:01:18.563001] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:00.658 [2024-11-20 21:01:18.565344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.658 [2024-11-20 21:01:18.565389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:00.658 [2024-11-20 21:01:18.565404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.345 ms 00:19:00.658 [2024-11-20 21:01:18.565417] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.658 [2024-11-20 21:01:18.565458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.658 [2024-11-20 21:01:18.565468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:00.658 [2024-11-20 21:01:18.565483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:00.658 [2024-11-20 21:01:18.565492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.658 [2024-11-20 21:01:18.565511] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:00.658 [2024-11-20 21:01:18.565668] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:00.658 [2024-11-20 21:01:18.565688] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:00.658 [2024-11-20 21:01:18.565700] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:00.659 [2024-11-20 21:01:18.565713] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:00.659 [2024-11-20 21:01:18.565722] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:00.659 [2024-11-20 21:01:18.565735] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:00.659 [2024-11-20 21:01:18.565744] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:00.659 [2024-11-20 21:01:18.565768] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:00.659 [2024-11-20 21:01:18.565779] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:00.659 [2024-11-20 21:01:18.565792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.659 [2024-11-20 21:01:18.565799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:00.659 [2024-11-20 21:01:18.565810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:19:00.659 [2024-11-20 21:01:18.565819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.659 [2024-11-20 21:01:18.565907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.659 [2024-11-20 21:01:18.565917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:00.659 [2024-11-20 21:01:18.565927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:00.659 [2024-11-20 21:01:18.565935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.659 [2024-11-20 21:01:18.566030] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:00.659 [2024-11-20 21:01:18.566043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:00.659 [2024-11-20 21:01:18.566055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:00.659 [2024-11-20 21:01:18.566063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.659 [2024-11-20 21:01:18.566073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:00.659 [2024-11-20 21:01:18.566080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:00.659 [2024-11-20 21:01:18.566089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:00.659 
[2024-11-20 21:01:18.566096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:00.659 [2024-11-20 21:01:18.566104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:00.659 [2024-11-20 21:01:18.566110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:00.659 [2024-11-20 21:01:18.566121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:00.659 [2024-11-20 21:01:18.566128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:00.659 [2024-11-20 21:01:18.566140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:00.659 [2024-11-20 21:01:18.566147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:00.659 [2024-11-20 21:01:18.566156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:00.659 [2024-11-20 21:01:18.566163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.659 [2024-11-20 21:01:18.566174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:00.659 [2024-11-20 21:01:18.566181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:00.659 [2024-11-20 21:01:18.566190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.659 [2024-11-20 21:01:18.566198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:00.659 [2024-11-20 21:01:18.566207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:00.659 [2024-11-20 21:01:18.566213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.659 [2024-11-20 21:01:18.566224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:00.659 [2024-11-20 21:01:18.566232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:00.659 [2024-11-20 21:01:18.566240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.659 [2024-11-20 21:01:18.566248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:00.659 [2024-11-20 21:01:18.566257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:00.659 [2024-11-20 21:01:18.566265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.659 [2024-11-20 21:01:18.566278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:00.659 [2024-11-20 21:01:18.566286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:00.659 [2024-11-20 21:01:18.566295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:00.659 [2024-11-20 21:01:18.566317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:00.659 [2024-11-20 21:01:18.566327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:00.659 [2024-11-20 21:01:18.566335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:00.659 [2024-11-20 21:01:18.566344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:00.659 [2024-11-20 21:01:18.566352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:00.659 [2024-11-20 21:01:18.566363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:00.659 [2024-11-20 21:01:18.566370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:00.659 [2024-11-20 21:01:18.566378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:00.659 [2024-11-20 21:01:18.566385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.659 [2024-11-20 21:01:18.566393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:00.659 [2024-11-20 21:01:18.566400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:00.659 [2024-11-20 21:01:18.566409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.659 [2024-11-20 21:01:18.566417] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:00.659 [2024-11-20 21:01:18.566430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:00.659 [2024-11-20 21:01:18.566439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:00.659 [2024-11-20 21:01:18.566450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:00.659 [2024-11-20 21:01:18.566459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:00.659 [2024-11-20 21:01:18.566470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:00.659 [2024-11-20 21:01:18.566478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:00.659 [2024-11-20 21:01:18.566488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:00.659 [2024-11-20 21:01:18.566496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:00.659 [2024-11-20 21:01:18.566505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:00.659 [2024-11-20 21:01:18.566517] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:00.659 [2024-11-20 21:01:18.566532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:00.659 [2024-11-20 21:01:18.566542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:00.659 [2024-11-20 21:01:18.566553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:00.659 [2024-11-20 21:01:18.566562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:00.659 [2024-11-20 21:01:18.566572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:00.659 [2024-11-20 21:01:18.566580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:00.659 [2024-11-20 21:01:18.566596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:00.659 [2024-11-20 21:01:18.566605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:00.659 [2024-11-20 21:01:18.566614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:00.659 [2024-11-20 21:01:18.566621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:00.659 [2024-11-20 21:01:18.566638] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:00.659 [2024-11-20 21:01:18.566645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:00.659 [2024-11-20 21:01:18.566654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:00.659 [2024-11-20 21:01:18.566661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:00.659 [2024-11-20 21:01:18.566671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:00.659 [2024-11-20 21:01:18.566678] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:00.659 [2024-11-20 21:01:18.566688] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:00.659 [2024-11-20 21:01:18.566698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:00.659 [2024-11-20 21:01:18.566707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:00.659 [2024-11-20 21:01:18.566714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:00.659 [2024-11-20 21:01:18.566723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:00.659 [2024-11-20 21:01:18.566731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.659 [2024-11-20 21:01:18.566743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:00.659 [2024-11-20 21:01:18.566766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:19:00.659 [2024-11-20 21:01:18.566777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.659 [2024-11-20 21:01:18.566833] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
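The sizes in the layout dump above follow from the bdev geometry reported earlier: 26476544 blocks x 4096 B gives the 103424.00 MiB base device capacity, and 20971520 L2P entries x 4 B per address give the 80.00 MiB l2p region (of which --l2p_dram_limit 20 later keeps only ~20 MiB resident). A quick arithmetic check with those figures:

    # base device capacity: num_blocks * block_size, in MiB
    echo $(( 26476544 * 4096 / 1048576 ))   # -> 103424
    # l2p region: L2P entries * 4-byte address size, in MiB
    echo $(( 20971520 * 4 / 1048576 ))      # -> 80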
00:19:00.659 [2024-11-20 21:01:18.566857] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:03.967 [2024-11-20 21:01:21.971277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.967 [2024-11-20 21:01:21.971340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:03.967 [2024-11-20 21:01:21.971355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3404.431 ms 00:19:03.967 [2024-11-20 21:01:21.971371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.967 [2024-11-20 21:01:21.980077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.967 [2024-11-20 21:01:21.980125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:03.967 [2024-11-20 21:01:21.980137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.627 ms 00:19:03.967 [2024-11-20 21:01:21.980149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.967 [2024-11-20 21:01:21.980236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.967 [2024-11-20 21:01:21.980247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:03.967 [2024-11-20 21:01:21.980256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:03.967 [2024-11-20 21:01:21.980268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.967 [2024-11-20 21:01:22.009302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.967 [2024-11-20 21:01:22.009361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:03.967 [2024-11-20 21:01:22.009377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.987 ms 00:19:03.967 [2024-11-20 21:01:22.009391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.967 [2024-11-20 21:01:22.009432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.967 [2024-11-20 21:01:22.009447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:03.967 [2024-11-20 21:01:22.009462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:03.967 [2024-11-20 21:01:22.009474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.967 [2024-11-20 21:01:22.009891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.967 [2024-11-20 21:01:22.009938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:03.967 [2024-11-20 21:01:22.009952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:19:03.967 [2024-11-20 21:01:22.009976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.967 [2024-11-20 21:01:22.010128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.967 [2024-11-20 21:01:22.010147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:03.967 [2024-11-20 21:01:22.010159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:03.967 [2024-11-20 21:01:22.010175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.967 [2024-11-20 21:01:22.016087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.967 [2024-11-20 21:01:22.016128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:03.967 [2024-11-20 
21:01:22.016141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.890 ms 00:19:03.967 [2024-11-20 21:01:22.016153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:03.967 [2024-11-20 21:01:22.024841] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:03.967 [2024-11-20 21:01:22.030068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:03.967 [2024-11-20 21:01:22.030097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:03.967 [2024-11-20 21:01:22.030109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.838 ms 00:19:03.967 [2024-11-20 21:01:22.030117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.229 [2024-11-20 21:01:22.091864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.229 [2024-11-20 21:01:22.092031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:04.229 [2024-11-20 21:01:22.092055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.721 ms 00:19:04.229 [2024-11-20 21:01:22.092063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.229 [2024-11-20 21:01:22.092241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.229 [2024-11-20 21:01:22.092252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:04.229 [2024-11-20 21:01:22.092262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:19:04.229 [2024-11-20 21:01:22.092269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.229 [2024-11-20 21:01:22.095705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.229 [2024-11-20 21:01:22.095841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:04.229 [2024-11-20 21:01:22.095865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.415 ms 00:19:04.229 [2024-11-20 21:01:22.095876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.229 [2024-11-20 21:01:22.099434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.229 [2024-11-20 21:01:22.099547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:04.229 [2024-11-20 21:01:22.099566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.489 ms 00:19:04.229 [2024-11-20 21:01:22.099574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.229 [2024-11-20 21:01:22.099883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.229 [2024-11-20 21:01:22.099894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:04.229 [2024-11-20 21:01:22.099906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:19:04.229 [2024-11-20 21:01:22.099916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.229 [2024-11-20 21:01:22.133639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.229 [2024-11-20 21:01:22.133782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:04.229 [2024-11-20 21:01:22.133802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.696 ms 00:19:04.229 [2024-11-20 21:01:22.133810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.229 [2024-11-20 21:01:22.138817] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.229 [2024-11-20 21:01:22.138854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:04.229 [2024-11-20 21:01:22.138866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.942 ms 00:19:04.229 [2024-11-20 21:01:22.138874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.229 [2024-11-20 21:01:22.142875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.229 [2024-11-20 21:01:22.142907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:04.229 [2024-11-20 21:01:22.142918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.965 ms 00:19:04.229 [2024-11-20 21:01:22.142925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.229 [2024-11-20 21:01:22.147681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.229 [2024-11-20 21:01:22.147715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:04.229 [2024-11-20 21:01:22.147728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.720 ms 00:19:04.229 [2024-11-20 21:01:22.147736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.229 [2024-11-20 21:01:22.147795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.229 [2024-11-20 21:01:22.147805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:04.229 [2024-11-20 21:01:22.147818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:04.229 [2024-11-20 21:01:22.147825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.229 [2024-11-20 21:01:22.147887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.229 [2024-11-20 21:01:22.147896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:04.229 [2024-11-20 21:01:22.147906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:04.229 [2024-11-20 21:01:22.147914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.229 [2024-11-20 21:01:22.148801] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3599.462 ms, result 0 00:19:04.229 { 00:19:04.229 "name": "ftl0", 00:19:04.229 "uuid": "0a08d9f0-a83b-4682-9ea9-22c97d854f3d" 00:19:04.229 } 00:19:04.229 21:01:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:04.229 21:01:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:04.229 21:01:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:04.490 21:01:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:04.490 [2024-11-20 21:01:22.477159] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:04.490 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:04.490 Zero copy mechanism will not be used. 00:19:04.490 Running I/O for 4 seconds... 
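The -o 69632 request size is 17 whole 4 KiB blocks (68 KiB), which is why it exceeds the 65536-byte zero-copy threshold flagged above. The MiB/s column in the tables that follow is simply IOPS x io_size / 2^20; a sketch of the conversion, using the ~1012 IOPS this run ends up reporting:

    io_size=69632
    echo $(( io_size / 4096 ))   # -> 17 blocks per I/O
    # MiB/s = IOPS * io_size / 2^20; 1012.12 IOPS -> ~67.21 MiB/s
    awk 'BEGIN { printf "%.2f\n", 1012.12 * 69632 / 1048576 }'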
00:19:06.380 1054.00 IOPS, 69.99 MiB/s [2024-11-20T21:01:25.887Z] 871.50 IOPS, 57.87 MiB/s [2024-11-20T21:01:26.831Z] 930.33 IOPS, 61.78 MiB/s [2024-11-20T21:01:26.831Z] 1012.25 IOPS, 67.22 MiB/s 00:19:08.712 Latency(us) 00:19:08.712 [2024-11-20T21:01:26.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.712 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:08.712 ftl0 : 4.00 1012.12 67.21 0.00 0.00 1038.38 173.29 3604.48 00:19:08.712 [2024-11-20T21:01:26.831Z] =================================================================================================================== 00:19:08.712 [2024-11-20T21:01:26.831Z] Total : 1012.12 67.21 0.00 0.00 1038.38 173.29 3604.48 00:19:08.712 [2024-11-20 21:01:26.484874] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:08.712 { 00:19:08.712 "results": [ 00:19:08.712 { 00:19:08.712 "job": "ftl0", 00:19:08.712 "core_mask": "0x1", 00:19:08.712 "workload": "randwrite", 00:19:08.712 "status": "finished", 00:19:08.712 "queue_depth": 1, 00:19:08.712 "io_size": 69632, 00:19:08.712 "runtime": 4.001491, 00:19:08.712 "iops": 1012.1227312519259, 00:19:08.712 "mibps": 67.2112751221982, 00:19:08.712 "io_failed": 0, 00:19:08.712 "io_timeout": 0, 00:19:08.712 "avg_latency_us": 1038.3832615384615, 00:19:08.712 "min_latency_us": 173.2923076923077, 00:19:08.712 "max_latency_us": 3604.48 00:19:08.712 } 00:19:08.712 ], 00:19:08.712 "core_count": 1 00:19:08.712 } 00:19:08.712 21:01:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:08.712 [2024-11-20 21:01:26.589383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:08.712 Running I/O for 4 seconds... 
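For the queue-depth-128 run just launched, throughput and average latency are tied by Little's law: IOPS is roughly queue_depth / avg_latency. As a plausibility check against the average the next table reports (~19465 us):

    # 128 outstanding I/Os / 19464.86e-6 s -> ~6576 IOPS,
    # close to the reported 6551 once setup gaps inside the 4.03 s runtime are included
    awk 'BEGIN { printf "%.0f\n", 128 / (19464.86 / 1e6) }'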
00:19:10.643 10397.00 IOPS, 40.61 MiB/s [2024-11-20T21:01:29.703Z] 7949.50 IOPS, 31.05 MiB/s [2024-11-20T21:01:30.645Z] 7099.67 IOPS, 27.73 MiB/s [2024-11-20T21:01:30.645Z] 6568.00 IOPS, 25.66 MiB/s 00:19:12.526 Latency(us) 00:19:12.526 [2024-11-20T21:01:30.645Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.526 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:12.526 ftl0 : 4.03 6551.47 25.59 0.00 0.00 19464.86 199.29 46580.97 00:19:12.526 [2024-11-20T21:01:30.645Z] =================================================================================================================== 00:19:12.526 [2024-11-20T21:01:30.645Z] Total : 6551.47 25.59 0.00 0.00 19464.86 0.00 46580.97 00:19:12.526 [2024-11-20 21:01:30.625296] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 { 00:19:12.526 "results": [ 00:19:12.526 { 00:19:12.526 "job": "ftl0", 00:19:12.526 "core_mask": "0x1", 00:19:12.526 "workload": "randwrite", 00:19:12.526 "status": "finished", 00:19:12.526 "queue_depth": 128, 00:19:12.526 "io_size": 4096, 00:19:12.526 "runtime": 4.029629, 00:19:12.526 "iops": 6551.4716119027335, 00:19:12.526 "mibps": 25.591685983995053, 00:19:12.526 "io_failed": 0, 00:19:12.526 "io_timeout": 0, 00:19:12.526 "avg_latency_us": 19464.862657342655, 00:19:12.526 "min_latency_us": 199.28615384615384, 00:19:12.526 "max_latency_us": 46580.97230769231 00:19:12.526 } 00:19:12.526 ], 00:19:12.526 "core_count": 1 00:19:12.526 } 00:19:12.788 21:01:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:12.788 [2024-11-20 21:01:30.746692] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:12.788 Running I/O for 4 seconds... 
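The verify run below reports an LBA range of start 0x0 length 0x1400000, i.e. it sweeps the whole FTL namespace: 0x1400000 blocks equals the 20971520 L2P entries created at startup, or 80 GiB of 4 KiB blocks. Converting as a sanity check:

    printf '%d\n' 0x1400000                    # -> 20971520 blocks
    echo $(( 0x1400000 * 4096 / 1073741824 ))  # -> 80 GiB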
00:19:14.681 4722.00 IOPS, 18.45 MiB/s [2024-11-20T21:01:34.188Z] 4928.50 IOPS, 19.25 MiB/s [2024-11-20T21:01:34.761Z] 5091.00 IOPS, 19.89 MiB/s [2024-11-20T21:01:35.023Z] 5039.50 IOPS, 19.69 MiB/s 00:19:16.904 Latency(us) 00:19:16.904 [2024-11-20T21:01:35.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.904 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:16.904 Verification LBA range: start 0x0 length 0x1400000 00:19:16.904 ftl0 : 4.01 5053.26 19.74 0.00 0.00 25257.53 296.17 36700.16 00:19:16.904 [2024-11-20T21:01:35.023Z] =================================================================================================================== 00:19:16.904 [2024-11-20T21:01:35.023Z] Total : 5053.26 19.74 0.00 0.00 25257.53 0.00 36700.16 00:19:16.904 [2024-11-20 21:01:34.770179] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:16.904 { 00:19:16.904 "results": [ 00:19:16.904 { 00:19:16.904 "job": "ftl0", 00:19:16.904 "core_mask": "0x1", 00:19:16.904 "workload": "verify", 00:19:16.904 "status": "finished", 00:19:16.904 "verify_range": { 00:19:16.904 "start": 0, 00:19:16.904 "length": 20971520 00:19:16.904 }, 00:19:16.904 "queue_depth": 128, 00:19:16.904 "io_size": 4096, 00:19:16.904 "runtime": 4.013649, 00:19:16.904 "iops": 5053.257023720808, 00:19:16.904 "mibps": 19.739285248909407, 00:19:16.904 "io_failed": 0, 00:19:16.904 "io_timeout": 0, 00:19:16.904 "avg_latency_us": 25257.53234106787, 00:19:16.904 "min_latency_us": 296.1723076923077, 00:19:16.904 "max_latency_us": 36700.16 00:19:16.905 } 00:19:16.905 ], 00:19:16.905 "core_count": 1 00:19:16.905 } 00:19:16.905 21:01:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:16.905 [2024-11-20 21:01:34.986608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.905 [2024-11-20 21:01:34.986668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:16.905 [2024-11-20 21:01:34.986689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:16.905 [2024-11-20 21:01:34.986699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.905 [2024-11-20 21:01:34.986724] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:16.905 [2024-11-20 21:01:34.987506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.905 [2024-11-20 21:01:34.987566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:16.905 [2024-11-20 21:01:34.987579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.765 ms 00:19:16.905 [2024-11-20 21:01:34.987591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:16.905 [2024-11-20 21:01:34.991056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:16.905 [2024-11-20 21:01:34.991248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:16.905 [2024-11-20 21:01:34.991269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.437 ms 00:19:16.905 [2024-11-20 21:01:34.991285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.169 [2024-11-20 21:01:35.217024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.169 [2024-11-20 21:01:35.217100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
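Teardown runs in the reverse order of setup: bdev_ftl_delete quiesces ftl0 and persists its L2P and metadata (the Persist L2P step whose duration is logged just below), and only afterwards does the EXIT trap from the prologue kill the bdevperf process. Condensed from the script's own trap and RPC call:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc_py" bdev_ftl_delete -b ftl0   # persist L2P/metadata, set FTL clean state
    killprocess "$bdevperf_pid"         # autotest_common.sh helper invoked by the EXIT trap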
00:19:17.169 [2024-11-20 21:01:35.217117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 225.713 ms 00:19:17.169 [2024-11-20 21:01:35.217133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.169 [2024-11-20 21:01:35.223342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.169 [2024-11-20 21:01:35.223396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:17.169 [2024-11-20 21:01:35.223410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.164 ms 00:19:17.169 [2024-11-20 21:01:35.223421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.169 [2024-11-20 21:01:35.226449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.169 [2024-11-20 21:01:35.226645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:17.169 [2024-11-20 21:01:35.226666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.946 ms 00:19:17.169 [2024-11-20 21:01:35.226681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.169 [2024-11-20 21:01:35.232671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.169 [2024-11-20 21:01:35.232765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:17.169 [2024-11-20 21:01:35.232777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.896 ms 00:19:17.169 [2024-11-20 21:01:35.232791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.169 [2024-11-20 21:01:35.232933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.169 [2024-11-20 21:01:35.232948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:17.169 [2024-11-20 21:01:35.232958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:17.169 [2024-11-20 21:01:35.232968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.169 [2024-11-20 21:01:35.236163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.169 [2024-11-20 21:01:35.236367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:17.169 [2024-11-20 21:01:35.236387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.177 ms 00:19:17.169 [2024-11-20 21:01:35.236397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-11-20 21:01:35.239275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-11-20 21:01:35.239337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:17.170 [2024-11-20 21:01:35.239347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.759 ms 00:19:17.170 [2024-11-20 21:01:35.239356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-11-20 21:01:35.241673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-11-20 21:01:35.241873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:17.170 [2024-11-20 21:01:35.242066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.271 ms 00:19:17.170 [2024-11-20 21:01:35.242119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-11-20 21:01:35.244232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.170 [2024-11-20 21:01:35.244406] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:17.170 [2024-11-20 21:01:35.244423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.028 ms 00:19:17.170 [2024-11-20 21:01:35.244433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.170 [2024-11-20 21:01:35.244500] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:17.170 [2024-11-20 21:01:35.244520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:17.170 [2024-11-20 21:01:35.244725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.244994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:17.170 [2024-11-20 21:01:35.245224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245411] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:17.171 [2024-11-20 21:01:35.245455] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:17.171 [2024-11-20 21:01:35.245473] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0a08d9f0-a83b-4682-9ea9-22c97d854f3d 00:19:17.171 [2024-11-20 21:01:35.245484] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:17.171 [2024-11-20 21:01:35.245492] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:17.171 [2024-11-20 21:01:35.245508] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:17.171 [2024-11-20 21:01:35.245517] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:17.171 [2024-11-20 21:01:35.245528] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:17.171 [2024-11-20 21:01:35.245537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:17.171 [2024-11-20 21:01:35.245547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:17.171 [2024-11-20 21:01:35.245553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:17.171 [2024-11-20 21:01:35.245562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:17.171 [2024-11-20 21:01:35.245570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.171 [2024-11-20 21:01:35.245579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:17.171 [2024-11-20 21:01:35.245592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:19:17.171 [2024-11-20 21:01:35.245602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.247953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.171 [2024-11-20 21:01:35.247988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:17.171 [2024-11-20 21:01:35.247998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.328 ms 00:19:17.171 [2024-11-20 21:01:35.248009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.248128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.171 [2024-11-20 21:01:35.248139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:17.171 [2024-11-20 21:01:35.248148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:17.171 [2024-11-20 21:01:35.248163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.255914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.171 [2024-11-20 21:01:35.255972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:17.171 [2024-11-20 21:01:35.255983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.171 [2024-11-20 21:01:35.255993] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.256057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.171 [2024-11-20 21:01:35.256068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:17.171 [2024-11-20 21:01:35.256076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.171 [2024-11-20 21:01:35.256090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.256165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.171 [2024-11-20 21:01:35.256178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:17.171 [2024-11-20 21:01:35.256186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.171 [2024-11-20 21:01:35.256196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.256211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.171 [2024-11-20 21:01:35.256222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:17.171 [2024-11-20 21:01:35.256235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.171 [2024-11-20 21:01:35.256251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.270044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.171 [2024-11-20 21:01:35.270108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:17.171 [2024-11-20 21:01:35.270120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.171 [2024-11-20 21:01:35.270130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.280892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.171 [2024-11-20 21:01:35.280948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.171 [2024-11-20 21:01:35.280959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.171 [2024-11-20 21:01:35.280973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.281041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.171 [2024-11-20 21:01:35.281055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.171 [2024-11-20 21:01:35.281064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.171 [2024-11-20 21:01:35.281074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.281138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.171 [2024-11-20 21:01:35.281150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.171 [2024-11-20 21:01:35.281159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.171 [2024-11-20 21:01:35.281172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.281247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.171 [2024-11-20 21:01:35.281259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.171 [2024-11-20 21:01:35.281267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:17.171 [2024-11-20 21:01:35.281277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.281306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.171 [2024-11-20 21:01:35.281318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:17.171 [2024-11-20 21:01:35.281326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.171 [2024-11-20 21:01:35.281336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.281376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.171 [2024-11-20 21:01:35.281388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.171 [2024-11-20 21:01:35.281396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.171 [2024-11-20 21:01:35.281412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.281460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:17.171 [2024-11-20 21:01:35.281473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.171 [2024-11-20 21:01:35.281481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:17.171 [2024-11-20 21:01:35.281493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.171 [2024-11-20 21:01:35.281637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 294.990 ms, result 0 00:19:17.433 true 00:19:17.433 21:01:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 86985 00:19:17.433 21:01:35 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 86985 ']' 00:19:17.433 21:01:35 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 86985 00:19:17.433 21:01:35 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:17.433 21:01:35 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:17.433 21:01:35 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86985 00:19:17.433 killing process with pid 86985 00:19:17.433 Received shutdown signal, test time was about 4.000000 seconds 00:19:17.433 00:19:17.433 Latency(us) 00:19:17.433 [2024-11-20T21:01:35.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.434 [2024-11-20T21:01:35.553Z] =================================================================================================================== 00:19:17.434 [2024-11-20T21:01:35.553Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:17.434 21:01:35 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:17.434 21:01:35 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:17.434 21:01:35 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86985' 00:19:17.434 21:01:35 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 86985 00:19:17.434 21:01:35 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 86985 00:19:20.740 Remove shared memory files 00:19:20.740 21:01:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:20.740 21:01:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:20.740 21:01:38 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:20.740 21:01:38 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:20.740 21:01:38 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:20.740 21:01:38 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:20.740 21:01:38 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:20.740 21:01:38 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:20.740 ************************************ 00:19:20.740 END TEST ftl_bdevperf 00:19:20.740 ************************************ 00:19:20.740 00:19:20.740 real 0m24.032s 00:19:20.740 user 0m26.668s 00:19:20.740 sys 0m0.913s 00:19:20.740 21:01:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.740 21:01:38 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:20.740 21:01:38 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:20.740 21:01:38 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:20.740 21:01:38 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.740 21:01:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:20.740 ************************************ 00:19:20.740 START TEST ftl_trim 00:19:20.740 ************************************ 00:19:20.740 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:20.740 * Looking for test storage... 00:19:20.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:20.740 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:20.740 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:20.740 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:20.740 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.740 21:01:38 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:20.741 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.741 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:20.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.741 --rc genhtml_branch_coverage=1 00:19:20.741 --rc genhtml_function_coverage=1 00:19:20.741 --rc genhtml_legend=1 00:19:20.741 --rc geninfo_all_blocks=1 00:19:20.741 --rc geninfo_unexecuted_blocks=1 00:19:20.741 00:19:20.741 ' 00:19:20.741 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:20.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.741 --rc genhtml_branch_coverage=1 00:19:20.741 --rc genhtml_function_coverage=1 00:19:20.741 --rc genhtml_legend=1 00:19:20.741 --rc geninfo_all_blocks=1 00:19:20.741 --rc geninfo_unexecuted_blocks=1 00:19:20.741 00:19:20.741 ' 00:19:20.741 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:20.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.741 --rc genhtml_branch_coverage=1 00:19:20.741 --rc genhtml_function_coverage=1 00:19:20.741 --rc genhtml_legend=1 00:19:20.741 --rc geninfo_all_blocks=1 00:19:20.741 --rc geninfo_unexecuted_blocks=1 00:19:20.741 00:19:20.741 ' 00:19:20.741 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:20.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.741 --rc genhtml_branch_coverage=1 00:19:20.741 --rc genhtml_function_coverage=1 00:19:20.741 --rc genhtml_legend=1 00:19:20.741 --rc geninfo_all_blocks=1 00:19:20.741 --rc geninfo_unexecuted_blocks=1 00:19:20.741 00:19:20.741 ' 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:20.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
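The cmp_versions xtrace above shows scripts/common.sh deciding that the installed lcov (1.15) is older than 2: each version string is split on the characters ".-:" and the numeric fields are compared left to right, with the first differing field deciding; here 1 < 2 settles it immediately, so the lt branch returns 0 and the LCOV_OPTS coverage flags get exported. A minimal standalone sketch of that compare loop, assuming purely numeric fields (the helper name version_lt is mine, not the script's):

    version_lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"   # split on the same chars the trace shows
      IFS='.-:' read -ra ver2 <<< "$2"
      local v a b max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}  # missing fields compare as 0
        (( a < b )) && return 0          # first lower field: less-than
        (( a > b )) && return 1          # first higher field: not less-than
      done
      return 1                           # all fields equal: not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 < 2"   # matches the branch taken above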
00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:20.741 21:01:38 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=87329 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 87329 00:19:20.741 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87329 ']' 00:19:20.741 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.741 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.741 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.741 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.741 21:01:38 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:20.741 21:01:38 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:21.003 [2024-11-20 21:01:38.927209] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:19:21.003 [2024-11-20 21:01:38.927600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87329 ] 00:19:21.003 [2024-11-20 21:01:39.076548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:21.272 [2024-11-20 21:01:39.121071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.272 [2024-11-20 21:01:39.121223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.272 [2024-11-20 21:01:39.121363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.848 21:01:39 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.848 21:01:39 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:21.848 21:01:39 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:21.848 21:01:39 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:21.848 21:01:39 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:21.848 21:01:39 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:21.848 21:01:39 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:21.848 21:01:39 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:22.107 21:01:40 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:22.107 21:01:40 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:22.107 21:01:40 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:22.107 21:01:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:22.107 21:01:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:22.107 21:01:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:22.107 21:01:40 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:22.107 21:01:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:22.367 21:01:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:22.367 { 00:19:22.367 "name": "nvme0n1", 00:19:22.367 "aliases": [ 00:19:22.367 "3cd7f4ac-4260-4919-a90e-f9029911d0ba" 00:19:22.367 ], 00:19:22.367 "product_name": "NVMe 
disk", 00:19:22.367 "block_size": 4096, 00:19:22.367 "num_blocks": 1310720, 00:19:22.367 "uuid": "3cd7f4ac-4260-4919-a90e-f9029911d0ba", 00:19:22.367 "numa_id": -1, 00:19:22.367 "assigned_rate_limits": { 00:19:22.367 "rw_ios_per_sec": 0, 00:19:22.367 "rw_mbytes_per_sec": 0, 00:19:22.367 "r_mbytes_per_sec": 0, 00:19:22.367 "w_mbytes_per_sec": 0 00:19:22.367 }, 00:19:22.367 "claimed": true, 00:19:22.367 "claim_type": "read_many_write_one", 00:19:22.367 "zoned": false, 00:19:22.367 "supported_io_types": { 00:19:22.367 "read": true, 00:19:22.367 "write": true, 00:19:22.367 "unmap": true, 00:19:22.367 "flush": true, 00:19:22.367 "reset": true, 00:19:22.367 "nvme_admin": true, 00:19:22.367 "nvme_io": true, 00:19:22.367 "nvme_io_md": false, 00:19:22.367 "write_zeroes": true, 00:19:22.367 "zcopy": false, 00:19:22.367 "get_zone_info": false, 00:19:22.367 "zone_management": false, 00:19:22.367 "zone_append": false, 00:19:22.367 "compare": true, 00:19:22.367 "compare_and_write": false, 00:19:22.367 "abort": true, 00:19:22.367 "seek_hole": false, 00:19:22.367 "seek_data": false, 00:19:22.367 "copy": true, 00:19:22.367 "nvme_iov_md": false 00:19:22.367 }, 00:19:22.367 "driver_specific": { 00:19:22.367 "nvme": [ 00:19:22.367 { 00:19:22.367 "pci_address": "0000:00:11.0", 00:19:22.367 "trid": { 00:19:22.367 "trtype": "PCIe", 00:19:22.367 "traddr": "0000:00:11.0" 00:19:22.367 }, 00:19:22.367 "ctrlr_data": { 00:19:22.367 "cntlid": 0, 00:19:22.367 "vendor_id": "0x1b36", 00:19:22.367 "model_number": "QEMU NVMe Ctrl", 00:19:22.367 "serial_number": "12341", 00:19:22.367 "firmware_revision": "8.0.0", 00:19:22.367 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:22.367 "oacs": { 00:19:22.367 "security": 0, 00:19:22.367 "format": 1, 00:19:22.367 "firmware": 0, 00:19:22.367 "ns_manage": 1 00:19:22.367 }, 00:19:22.367 "multi_ctrlr": false, 00:19:22.367 "ana_reporting": false 00:19:22.367 }, 00:19:22.367 "vs": { 00:19:22.367 "nvme_version": "1.4" 00:19:22.367 }, 00:19:22.367 "ns_data": { 00:19:22.367 "id": 1, 00:19:22.367 "can_share": false 00:19:22.367 } 00:19:22.367 } 00:19:22.367 ], 00:19:22.367 "mp_policy": "active_passive" 00:19:22.367 } 00:19:22.367 } 00:19:22.367 ]' 00:19:22.367 21:01:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:22.367 21:01:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:22.367 21:01:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:22.367 21:01:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:22.367 21:01:40 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:22.367 21:01:40 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:22.367 21:01:40 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:22.367 21:01:40 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:22.367 21:01:40 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:22.367 21:01:40 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:22.367 21:01:40 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:22.628 21:01:40 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=7dc072ce-5524-4594-a559-439ab91c0be6 00:19:22.628 21:01:40 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:22.628 21:01:40 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7dc072ce-5524-4594-a559-439ab91c0be6 00:19:22.888 21:01:40 ftl.ftl_trim -- ftl/common.sh@68 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:23.149 21:01:41 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=a4d13566-26f7-4efb-8ec4-fa35e95f21aa 00:19:23.149 21:01:41 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a4d13566-26f7-4efb-8ec4-fa35e95f21aa 00:19:23.408 21:01:41 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=721e3a99-e634-451a-a5d6-a10c41fdd01b 00:19:23.408 21:01:41 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 721e3a99-e634-451a-a5d6-a10c41fdd01b 00:19:23.408 21:01:41 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:23.408 21:01:41 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:23.408 21:01:41 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=721e3a99-e634-451a-a5d6-a10c41fdd01b 00:19:23.408 21:01:41 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:23.408 21:01:41 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 721e3a99-e634-451a-a5d6-a10c41fdd01b 00:19:23.408 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=721e3a99-e634-451a-a5d6-a10c41fdd01b 00:19:23.408 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:23.408 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:23.408 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:23.408 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 721e3a99-e634-451a-a5d6-a10c41fdd01b 00:19:23.666 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:23.667 { 00:19:23.667 "name": "721e3a99-e634-451a-a5d6-a10c41fdd01b", 00:19:23.667 "aliases": [ 00:19:23.667 "lvs/nvme0n1p0" 00:19:23.667 ], 00:19:23.667 "product_name": "Logical Volume", 00:19:23.667 "block_size": 4096, 00:19:23.667 "num_blocks": 26476544, 00:19:23.667 "uuid": "721e3a99-e634-451a-a5d6-a10c41fdd01b", 00:19:23.667 "assigned_rate_limits": { 00:19:23.667 "rw_ios_per_sec": 0, 00:19:23.667 "rw_mbytes_per_sec": 0, 00:19:23.667 "r_mbytes_per_sec": 0, 00:19:23.667 "w_mbytes_per_sec": 0 00:19:23.667 }, 00:19:23.667 "claimed": false, 00:19:23.667 "zoned": false, 00:19:23.667 "supported_io_types": { 00:19:23.667 "read": true, 00:19:23.667 "write": true, 00:19:23.667 "unmap": true, 00:19:23.667 "flush": false, 00:19:23.667 "reset": true, 00:19:23.667 "nvme_admin": false, 00:19:23.667 "nvme_io": false, 00:19:23.667 "nvme_io_md": false, 00:19:23.667 "write_zeroes": true, 00:19:23.667 "zcopy": false, 00:19:23.667 "get_zone_info": false, 00:19:23.667 "zone_management": false, 00:19:23.667 "zone_append": false, 00:19:23.667 "compare": false, 00:19:23.667 "compare_and_write": false, 00:19:23.667 "abort": false, 00:19:23.667 "seek_hole": true, 00:19:23.667 "seek_data": true, 00:19:23.667 "copy": false, 00:19:23.667 "nvme_iov_md": false 00:19:23.667 }, 00:19:23.667 "driver_specific": { 00:19:23.667 "lvol": { 00:19:23.667 "lvol_store_uuid": "a4d13566-26f7-4efb-8ec4-fa35e95f21aa", 00:19:23.667 "base_bdev": "nvme0n1", 00:19:23.667 "thin_provision": true, 00:19:23.667 "num_allocated_clusters": 0, 00:19:23.667 "snapshot": false, 00:19:23.667 "clone": false, 00:19:23.667 "esnap_clone": false 00:19:23.667 } 00:19:23.667 } 00:19:23.667 } 00:19:23.667 ]' 00:19:23.667 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:23.667 21:01:41 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:23.667 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:23.667 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:23.667 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:23.667 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:23.667 21:01:41 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:23.667 21:01:41 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:23.667 21:01:41 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:23.926 21:01:41 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:23.926 21:01:41 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:23.926 21:01:41 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 721e3a99-e634-451a-a5d6-a10c41fdd01b 00:19:23.926 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=721e3a99-e634-451a-a5d6-a10c41fdd01b 00:19:23.926 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:23.926 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:23.926 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:23.926 21:01:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 721e3a99-e634-451a-a5d6-a10c41fdd01b 00:19:24.184 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:24.184 { 00:19:24.184 "name": "721e3a99-e634-451a-a5d6-a10c41fdd01b", 00:19:24.184 "aliases": [ 00:19:24.184 "lvs/nvme0n1p0" 00:19:24.184 ], 00:19:24.184 "product_name": "Logical Volume", 00:19:24.184 "block_size": 4096, 00:19:24.184 "num_blocks": 26476544, 00:19:24.184 "uuid": "721e3a99-e634-451a-a5d6-a10c41fdd01b", 00:19:24.184 "assigned_rate_limits": { 00:19:24.184 "rw_ios_per_sec": 0, 00:19:24.184 "rw_mbytes_per_sec": 0, 00:19:24.184 "r_mbytes_per_sec": 0, 00:19:24.184 "w_mbytes_per_sec": 0 00:19:24.184 }, 00:19:24.184 "claimed": false, 00:19:24.184 "zoned": false, 00:19:24.184 "supported_io_types": { 00:19:24.185 "read": true, 00:19:24.185 "write": true, 00:19:24.185 "unmap": true, 00:19:24.185 "flush": false, 00:19:24.185 "reset": true, 00:19:24.185 "nvme_admin": false, 00:19:24.185 "nvme_io": false, 00:19:24.185 "nvme_io_md": false, 00:19:24.185 "write_zeroes": true, 00:19:24.185 "zcopy": false, 00:19:24.185 "get_zone_info": false, 00:19:24.185 "zone_management": false, 00:19:24.185 "zone_append": false, 00:19:24.185 "compare": false, 00:19:24.185 "compare_and_write": false, 00:19:24.185 "abort": false, 00:19:24.185 "seek_hole": true, 00:19:24.185 "seek_data": true, 00:19:24.185 "copy": false, 00:19:24.185 "nvme_iov_md": false 00:19:24.185 }, 00:19:24.185 "driver_specific": { 00:19:24.185 "lvol": { 00:19:24.185 "lvol_store_uuid": "a4d13566-26f7-4efb-8ec4-fa35e95f21aa", 00:19:24.185 "base_bdev": "nvme0n1", 00:19:24.185 "thin_provision": true, 00:19:24.185 "num_allocated_clusters": 0, 00:19:24.185 "snapshot": false, 00:19:24.185 "clone": false, 00:19:24.185 "esnap_clone": false 00:19:24.185 } 00:19:24.185 } 00:19:24.185 } 00:19:24.185 ]' 00:19:24.185 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:24.185 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:24.185 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq 
'.[] .num_blocks' 00:19:24.185 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:24.185 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:24.185 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:24.185 21:01:42 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:24.185 21:01:42 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:24.443 21:01:42 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:24.443 21:01:42 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:24.443 21:01:42 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 721e3a99-e634-451a-a5d6-a10c41fdd01b 00:19:24.443 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=721e3a99-e634-451a-a5d6-a10c41fdd01b 00:19:24.443 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:24.443 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:24.443 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:24.443 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 721e3a99-e634-451a-a5d6-a10c41fdd01b 00:19:24.443 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:24.443 { 00:19:24.443 "name": "721e3a99-e634-451a-a5d6-a10c41fdd01b", 00:19:24.443 "aliases": [ 00:19:24.443 "lvs/nvme0n1p0" 00:19:24.443 ], 00:19:24.443 "product_name": "Logical Volume", 00:19:24.443 "block_size": 4096, 00:19:24.443 "num_blocks": 26476544, 00:19:24.443 "uuid": "721e3a99-e634-451a-a5d6-a10c41fdd01b", 00:19:24.443 "assigned_rate_limits": { 00:19:24.443 "rw_ios_per_sec": 0, 00:19:24.443 "rw_mbytes_per_sec": 0, 00:19:24.443 "r_mbytes_per_sec": 0, 00:19:24.443 "w_mbytes_per_sec": 0 00:19:24.443 }, 00:19:24.443 "claimed": false, 00:19:24.443 "zoned": false, 00:19:24.443 "supported_io_types": { 00:19:24.443 "read": true, 00:19:24.443 "write": true, 00:19:24.443 "unmap": true, 00:19:24.443 "flush": false, 00:19:24.443 "reset": true, 00:19:24.443 "nvme_admin": false, 00:19:24.443 "nvme_io": false, 00:19:24.443 "nvme_io_md": false, 00:19:24.443 "write_zeroes": true, 00:19:24.443 "zcopy": false, 00:19:24.443 "get_zone_info": false, 00:19:24.443 "zone_management": false, 00:19:24.443 "zone_append": false, 00:19:24.443 "compare": false, 00:19:24.443 "compare_and_write": false, 00:19:24.443 "abort": false, 00:19:24.443 "seek_hole": true, 00:19:24.443 "seek_data": true, 00:19:24.443 "copy": false, 00:19:24.443 "nvme_iov_md": false 00:19:24.443 }, 00:19:24.443 "driver_specific": { 00:19:24.443 "lvol": { 00:19:24.443 "lvol_store_uuid": "a4d13566-26f7-4efb-8ec4-fa35e95f21aa", 00:19:24.443 "base_bdev": "nvme0n1", 00:19:24.443 "thin_provision": true, 00:19:24.443 "num_allocated_clusters": 0, 00:19:24.443 "snapshot": false, 00:19:24.443 "clone": false, 00:19:24.443 "esnap_clone": false 00:19:24.443 } 00:19:24.443 } 00:19:24.443 } 00:19:24.443 ]' 00:19:24.443 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:24.702 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:24.702 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:24.702 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:24.702 21:01:42 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:24.702 21:01:42 
ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:24.702 21:01:42 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:24.702 21:01:42 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 721e3a99-e634-451a-a5d6-a10c41fdd01b -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:24.702 [2024-11-20 21:01:42.804360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.702 [2024-11-20 21:01:42.804502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:24.702 [2024-11-20 21:01:42.804554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:24.702 [2024-11-20 21:01:42.804576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.702 [2024-11-20 21:01:42.806649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.702 [2024-11-20 21:01:42.806760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:24.702 [2024-11-20 21:01:42.806808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.023 ms 00:19:24.702 [2024-11-20 21:01:42.806842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.702 [2024-11-20 21:01:42.806933] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:24.702 [2024-11-20 21:01:42.807332] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:24.702 [2024-11-20 21:01:42.807467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.702 [2024-11-20 21:01:42.807486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:24.702 [2024-11-20 21:01:42.807503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:19:24.702 [2024-11-20 21:01:42.807514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.702 [2024-11-20 21:01:42.807715] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 901b289f-d2d0-45a3-b4d3-7f72fc1e8a6a 00:19:24.702 [2024-11-20 21:01:42.809010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.702 [2024-11-20 21:01:42.809031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:24.702 [2024-11-20 21:01:42.809040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:24.702 [2024-11-20 21:01:42.809047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.702 [2024-11-20 21:01:42.815881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.702 [2024-11-20 21:01:42.815906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:24.702 [2024-11-20 21:01:42.815915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.763 ms 00:19:24.702 [2024-11-20 21:01:42.815922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.702 [2024-11-20 21:01:42.816020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.702 [2024-11-20 21:01:42.816029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:24.702 [2024-11-20 21:01:42.816038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:24.702 [2024-11-20 21:01:42.816045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:19:24.702 [2024-11-20 21:01:42.816083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.702 [2024-11-20 21:01:42.816091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:24.702 [2024-11-20 21:01:42.816098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:24.702 [2024-11-20 21:01:42.816104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.702 [2024-11-20 21:01:42.816135] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:24.962 [2024-11-20 21:01:42.817742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.962 [2024-11-20 21:01:42.817781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:24.962 [2024-11-20 21:01:42.817789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.612 ms 00:19:24.963 [2024-11-20 21:01:42.817800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.963 [2024-11-20 21:01:42.817838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.963 [2024-11-20 21:01:42.817847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:24.963 [2024-11-20 21:01:42.817854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:24.963 [2024-11-20 21:01:42.817863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.963 [2024-11-20 21:01:42.817895] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:24.963 [2024-11-20 21:01:42.818026] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:24.963 [2024-11-20 21:01:42.818036] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:24.963 [2024-11-20 21:01:42.818047] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:24.963 [2024-11-20 21:01:42.818057] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:24.963 [2024-11-20 21:01:42.818066] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:24.963 [2024-11-20 21:01:42.818073] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:24.963 [2024-11-20 21:01:42.818082] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:24.963 [2024-11-20 21:01:42.818088] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:24.963 [2024-11-20 21:01:42.818105] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:24.963 [2024-11-20 21:01:42.818114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.963 [2024-11-20 21:01:42.818121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:24.963 [2024-11-20 21:01:42.818127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:19:24.963 [2024-11-20 21:01:42.818135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.963 [2024-11-20 21:01:42.818205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.963 [2024-11-20 21:01:42.818216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:24.963 
[2024-11-20 21:01:42.818222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:24.963 [2024-11-20 21:01:42.818229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.963 [2024-11-20 21:01:42.818340] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:24.963 [2024-11-20 21:01:42.818352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:24.963 [2024-11-20 21:01:42.818360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.963 [2024-11-20 21:01:42.818369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.963 [2024-11-20 21:01:42.818374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:24.963 [2024-11-20 21:01:42.818381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:24.963 [2024-11-20 21:01:42.818387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:24.963 [2024-11-20 21:01:42.818395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:24.963 [2024-11-20 21:01:42.818402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:24.963 [2024-11-20 21:01:42.818410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.963 [2024-11-20 21:01:42.818416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:24.963 [2024-11-20 21:01:42.818426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:24.963 [2024-11-20 21:01:42.818432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:24.963 [2024-11-20 21:01:42.818442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:24.963 [2024-11-20 21:01:42.818448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:24.963 [2024-11-20 21:01:42.818456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.963 [2024-11-20 21:01:42.818462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:24.963 [2024-11-20 21:01:42.818469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:24.963 [2024-11-20 21:01:42.818475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.963 [2024-11-20 21:01:42.818483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:24.963 [2024-11-20 21:01:42.818489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:24.963 [2024-11-20 21:01:42.818496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.963 [2024-11-20 21:01:42.818503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:24.963 [2024-11-20 21:01:42.818522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:24.963 [2024-11-20 21:01:42.818528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.963 [2024-11-20 21:01:42.818536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:24.963 [2024-11-20 21:01:42.818542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:24.963 [2024-11-20 21:01:42.818549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.963 [2024-11-20 21:01:42.818555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:24.963 [2024-11-20 21:01:42.818564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 
MiB 00:19:24.963 [2024-11-20 21:01:42.818570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:24.963 [2024-11-20 21:01:42.818578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:24.963 [2024-11-20 21:01:42.818584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:24.963 [2024-11-20 21:01:42.818593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.963 [2024-11-20 21:01:42.818600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:24.963 [2024-11-20 21:01:42.818608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:24.963 [2024-11-20 21:01:42.818614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:24.963 [2024-11-20 21:01:42.818623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:24.963 [2024-11-20 21:01:42.818629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:24.963 [2024-11-20 21:01:42.818637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.963 [2024-11-20 21:01:42.818643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:24.963 [2024-11-20 21:01:42.818650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:24.963 [2024-11-20 21:01:42.818656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.963 [2024-11-20 21:01:42.818663] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:24.963 [2024-11-20 21:01:42.818669] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:24.963 [2024-11-20 21:01:42.818680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:24.963 [2024-11-20 21:01:42.818686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:24.963 [2024-11-20 21:01:42.818695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:24.963 [2024-11-20 21:01:42.818701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:24.963 [2024-11-20 21:01:42.818709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:24.963 [2024-11-20 21:01:42.818716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:24.963 [2024-11-20 21:01:42.818723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:24.963 [2024-11-20 21:01:42.818729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:24.963 [2024-11-20 21:01:42.818740] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:24.963 [2024-11-20 21:01:42.818949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.963 [2024-11-20 21:01:42.819080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:24.963 [2024-11-20 21:01:42.819106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:24.963 [2024-11-20 21:01:42.819129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:24.963 [2024-11-20 21:01:42.819184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:24.963 [2024-11-20 21:01:42.819210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:24.963 [2024-11-20 21:01:42.819232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:24.963 [2024-11-20 21:01:42.819288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:24.963 [2024-11-20 21:01:42.819333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:24.963 [2024-11-20 21:01:42.819356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:24.963 [2024-11-20 21:01:42.819378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:24.963 [2024-11-20 21:01:42.819448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:24.963 [2024-11-20 21:01:42.819472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:24.963 [2024-11-20 21:01:42.819496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:24.963 [2024-11-20 21:01:42.819548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:24.963 [2024-11-20 21:01:42.819600] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:24.963 [2024-11-20 21:01:42.819643] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:24.963 [2024-11-20 21:01:42.819670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:24.963 [2024-11-20 21:01:42.819692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:24.964 [2024-11-20 21:01:42.819760] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:24.964 [2024-11-20 21:01:42.819786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:24.964 [2024-11-20 21:01:42.819811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:24.964 [2024-11-20 21:01:42.819853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:24.964 [2024-11-20 21:01:42.819884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.537 ms 00:19:24.964 [2024-11-20 21:01:42.819899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:24.964 [2024-11-20 21:01:42.819999] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
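The superblock dump above lists each region as blk_offs/blk_sz counted in 4 KiB FTL blocks, while the layout dump before it reports the same regions in MiB; the two agree. A minimal shell sketch of the conversion (block sizes copied from the dump; not part of the test output):

for sz in 0x20 0x5a00 0x80 0x800; do
  bytes=$(( sz * 4096 ))     # bash arithmetic accepts 0x hex; 4096 B per FTL block
  awk -v s="$sz" -v b="$bytes" 'BEGIN { printf "%s -> %.2f MiB\n", s, b / 1048576 }'
done
# 0x5a00 -> 90.00 MiB (the l2p region), 0x800 -> 8.00 MiB (each p2l region,
# i.e. 2048 checkpoint pages at 4 KiB), 0x80 -> 0.50 MiB (band_md),
# 0x20 -> 0.12 MiB (sb) -- all matching the NV cache layout printed above.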
00:19:24.964 [2024-11-20 21:01:42.820088] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:27.496 [2024-11-20 21:01:45.204703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.204915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:27.496 [2024-11-20 21:01:45.204943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2384.690 ms 00:19:27.496 [2024-11-20 21:01:45.204956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.216137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.216278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.496 [2024-11-20 21:01:45.216302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.058 ms 00:19:27.496 [2024-11-20 21:01:45.216311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.216455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.216465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:27.496 [2024-11-20 21:01:45.216491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:27.496 [2024-11-20 21:01:45.216502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.234617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.234659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.496 [2024-11-20 21:01:45.234674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.083 ms 00:19:27.496 [2024-11-20 21:01:45.234683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.234793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.234805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.496 [2024-11-20 21:01:45.234819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:27.496 [2024-11-20 21:01:45.234827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.235255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.235287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.496 [2024-11-20 21:01:45.235299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:19:27.496 [2024-11-20 21:01:45.235308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.235454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.235464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.496 [2024-11-20 21:01:45.235476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:19:27.496 [2024-11-20 21:01:45.235487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.242856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.243049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:27.496 [2024-11-20 
21:01:45.243073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.334 ms 00:19:27.496 [2024-11-20 21:01:45.243084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.253386] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:27.496 [2024-11-20 21:01:45.270500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.270638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:27.496 [2024-11-20 21:01:45.270654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.301 ms 00:19:27.496 [2024-11-20 21:01:45.270676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.330546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.330660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:27.496 [2024-11-20 21:01:45.330697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.742 ms 00:19:27.496 [2024-11-20 21:01:45.330732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.331368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.331436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:27.496 [2024-11-20 21:01:45.331467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:19:27.496 [2024-11-20 21:01:45.331498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.336300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.336336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:27.496 [2024-11-20 21:01:45.336346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.688 ms 00:19:27.496 [2024-11-20 21:01:45.336355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.339053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.339086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:27.496 [2024-11-20 21:01:45.339096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.652 ms 00:19:27.496 [2024-11-20 21:01:45.339105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.339425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.339443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:27.496 [2024-11-20 21:01:45.339452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:19:27.496 [2024-11-20 21:01:45.339463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.367397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.367433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:27.496 [2024-11-20 21:01:45.367443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.908 ms 00:19:27.496 [2024-11-20 21:01:45.367456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.371897] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.371930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:27.496 [2024-11-20 21:01:45.371940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.360 ms 00:19:27.496 [2024-11-20 21:01:45.371950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.375039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.375072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:27.496 [2024-11-20 21:01:45.375082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.043 ms 00:19:27.496 [2024-11-20 21:01:45.375092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.378529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.378562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:27.496 [2024-11-20 21:01:45.378571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.388 ms 00:19:27.496 [2024-11-20 21:01:45.378598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.378656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.378669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:27.496 [2024-11-20 21:01:45.378678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:27.496 [2024-11-20 21:01:45.378688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.378794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.496 [2024-11-20 21:01:45.378808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:27.496 [2024-11-20 21:01:45.378817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:27.496 [2024-11-20 21:01:45.378827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.496 [2024-11-20 21:01:45.379793] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:27.496 [2024-11-20 21:01:45.380783] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2575.095 ms, result 0 00:19:27.496 [2024-11-20 21:01:45.381775] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:27.496 { 00:19:27.496 "name": "ftl0", 00:19:27.496 "uuid": "901b289f-d2d0-45a3-b4d3-7f72fc1e8a6a" 00:19:27.496 } 00:19:27.496 21:01:45 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:27.496 21:01:45 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:27.496 21:01:45 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:27.496 21:01:45 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:27.496 21:01:45 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:27.496 21:01:45 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:27.496 21:01:45 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:27.496 21:01:45 ftl.ftl_trim -- common/autotest_common.sh@910 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:27.755 [ 00:19:27.755 { 00:19:27.755 "name": "ftl0", 00:19:27.755 "aliases": [ 00:19:27.755 "901b289f-d2d0-45a3-b4d3-7f72fc1e8a6a" 00:19:27.755 ], 00:19:27.755 "product_name": "FTL disk", 00:19:27.755 "block_size": 4096, 00:19:27.755 "num_blocks": 23592960, 00:19:27.755 "uuid": "901b289f-d2d0-45a3-b4d3-7f72fc1e8a6a", 00:19:27.755 "assigned_rate_limits": { 00:19:27.755 "rw_ios_per_sec": 0, 00:19:27.755 "rw_mbytes_per_sec": 0, 00:19:27.755 "r_mbytes_per_sec": 0, 00:19:27.755 "w_mbytes_per_sec": 0 00:19:27.755 }, 00:19:27.755 "claimed": false, 00:19:27.755 "zoned": false, 00:19:27.755 "supported_io_types": { 00:19:27.755 "read": true, 00:19:27.755 "write": true, 00:19:27.755 "unmap": true, 00:19:27.755 "flush": true, 00:19:27.755 "reset": false, 00:19:27.755 "nvme_admin": false, 00:19:27.755 "nvme_io": false, 00:19:27.755 "nvme_io_md": false, 00:19:27.755 "write_zeroes": true, 00:19:27.755 "zcopy": false, 00:19:27.755 "get_zone_info": false, 00:19:27.755 "zone_management": false, 00:19:27.755 "zone_append": false, 00:19:27.755 "compare": false, 00:19:27.755 "compare_and_write": false, 00:19:27.755 "abort": false, 00:19:27.755 "seek_hole": false, 00:19:27.755 "seek_data": false, 00:19:27.755 "copy": false, 00:19:27.755 "nvme_iov_md": false 00:19:27.755 }, 00:19:27.755 "driver_specific": { 00:19:27.755 "ftl": { 00:19:27.755 "base_bdev": "721e3a99-e634-451a-a5d6-a10c41fdd01b", 00:19:27.755 "cache": "nvc0n1p0" 00:19:27.755 } 00:19:27.755 } 00:19:27.755 } 00:19:27.755 ] 00:19:27.755 21:01:45 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:27.755 21:01:45 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:27.755 21:01:45 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:28.014 21:01:46 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:28.014 21:01:46 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:28.273 21:01:46 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:28.273 { 00:19:28.273 "name": "ftl0", 00:19:28.273 "aliases": [ 00:19:28.273 "901b289f-d2d0-45a3-b4d3-7f72fc1e8a6a" 00:19:28.273 ], 00:19:28.273 "product_name": "FTL disk", 00:19:28.273 "block_size": 4096, 00:19:28.273 "num_blocks": 23592960, 00:19:28.273 "uuid": "901b289f-d2d0-45a3-b4d3-7f72fc1e8a6a", 00:19:28.273 "assigned_rate_limits": { 00:19:28.273 "rw_ios_per_sec": 0, 00:19:28.273 "rw_mbytes_per_sec": 0, 00:19:28.273 "r_mbytes_per_sec": 0, 00:19:28.273 "w_mbytes_per_sec": 0 00:19:28.273 }, 00:19:28.273 "claimed": false, 00:19:28.273 "zoned": false, 00:19:28.273 "supported_io_types": { 00:19:28.273 "read": true, 00:19:28.273 "write": true, 00:19:28.273 "unmap": true, 00:19:28.273 "flush": true, 00:19:28.273 "reset": false, 00:19:28.273 "nvme_admin": false, 00:19:28.273 "nvme_io": false, 00:19:28.273 "nvme_io_md": false, 00:19:28.273 "write_zeroes": true, 00:19:28.273 "zcopy": false, 00:19:28.273 "get_zone_info": false, 00:19:28.273 "zone_management": false, 00:19:28.273 "zone_append": false, 00:19:28.273 "compare": false, 00:19:28.273 "compare_and_write": false, 00:19:28.273 "abort": false, 00:19:28.273 "seek_hole": false, 00:19:28.273 "seek_data": false, 00:19:28.273 "copy": false, 00:19:28.273 "nvme_iov_md": false 00:19:28.273 }, 00:19:28.273 "driver_specific": { 00:19:28.273 "ftl": { 00:19:28.273 "base_bdev": "721e3a99-e634-451a-a5d6-a10c41fdd01b", 00:19:28.273 "cache": "nvc0n1p0" 
00:19:28.273 } 00:19:28.273 } 00:19:28.273 } 00:19:28.273 ]' 00:19:28.273 21:01:46 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:28.273 21:01:46 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:28.273 21:01:46 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:28.534 [2024-11-20 21:01:46.409998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.534 [2024-11-20 21:01:46.410042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:28.534 [2024-11-20 21:01:46.410057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:28.534 [2024-11-20 21:01:46.410066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.534 [2024-11-20 21:01:46.410106] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:28.534 [2024-11-20 21:01:46.410672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.534 [2024-11-20 21:01:46.410693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:28.534 [2024-11-20 21:01:46.410702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:19:28.534 [2024-11-20 21:01:46.410725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.534 [2024-11-20 21:01:46.411309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.534 [2024-11-20 21:01:46.411322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:28.534 [2024-11-20 21:01:46.411331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:19:28.534 [2024-11-20 21:01:46.411341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.534 [2024-11-20 21:01:46.414994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.534 [2024-11-20 21:01:46.415018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:28.534 [2024-11-20 21:01:46.415028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.633 ms 00:19:28.534 [2024-11-20 21:01:46.415039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.534 [2024-11-20 21:01:46.421961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.534 [2024-11-20 21:01:46.421992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:28.534 [2024-11-20 21:01:46.422002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.872 ms 00:19:28.534 [2024-11-20 21:01:46.422013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.534 [2024-11-20 21:01:46.423801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.534 [2024-11-20 21:01:46.423832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:28.534 [2024-11-20 21:01:46.423841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.700 ms 00:19:28.534 [2024-11-20 21:01:46.423850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.534 [2024-11-20 21:01:46.428189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.534 [2024-11-20 21:01:46.428221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:28.534 [2024-11-20 21:01:46.428231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.294 ms 00:19:28.534 
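The waitforbdev/jq sequence above boils down to two RPCs: bdev_wait_for_examine to let bdev examination settle, then bdev_get_bdevs piped through jq to pull out num_blocks. A minimal standalone sketch, assuming a running SPDK target and the stock scripts/rpc.py:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_wait_for_examine                      # returns once examine callbacks finish
nb=$($RPC bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks')
echo "ftl0: $nb blocks"                         # 23592960 in this run
# Cross-checks against the startup dump: 23592960 blocks * 4096 B = 90 GiB of
# user-visible capacity, and 23592960 L2P entries * 4 B each = 90.00 MiB --
# exactly the size of the l2p region in the NV cache layout.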
[2024-11-20 21:01:46.428241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.534 [2024-11-20 21:01:46.428430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.534 [2024-11-20 21:01:46.428446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:28.534 [2024-11-20 21:01:46.428455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:19:28.534 [2024-11-20 21:01:46.428464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.534 [2024-11-20 21:01:46.430323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.534 [2024-11-20 21:01:46.430353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:28.534 [2024-11-20 21:01:46.430361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.826 ms 00:19:28.534 [2024-11-20 21:01:46.430372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.534 [2024-11-20 21:01:46.431866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.534 [2024-11-20 21:01:46.431895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:28.534 [2024-11-20 21:01:46.431903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.435 ms 00:19:28.534 [2024-11-20 21:01:46.431912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.534 [2024-11-20 21:01:46.433074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.534 [2024-11-20 21:01:46.433103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:28.534 [2024-11-20 21:01:46.433111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:19:28.534 [2024-11-20 21:01:46.433120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.534 [2024-11-20 21:01:46.434269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.534 [2024-11-20 21:01:46.434306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:28.534 [2024-11-20 21:01:46.434314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:19:28.534 [2024-11-20 21:01:46.434323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.534 [2024-11-20 21:01:46.434366] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:28.534 [2024-11-20 21:01:46.434381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434651] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:28.534 [2024-11-20 21:01:46.434684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 
21:01:46.434870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.434999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:19:28.535 [2024-11-20 21:01:46.435083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:28.535 [2024-11-20 21:01:46.435255] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:28.535 [2024-11-20 21:01:46.435263] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 901b289f-d2d0-45a3-b4d3-7f72fc1e8a6a 00:19:28.535 [2024-11-20 21:01:46.435272] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:28.535 [2024-11-20 21:01:46.435280] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:28.535 [2024-11-20 21:01:46.435292] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:28.535 [2024-11-20 21:01:46.435299] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:28.535 [2024-11-20 21:01:46.435309] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:28.535 [2024-11-20 21:01:46.435317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:28.535 [2024-11-20 21:01:46.435326] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:28.535 [2024-11-20 21:01:46.435332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:28.535 [2024-11-20 21:01:46.435340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:28.535 [2024-11-20 21:01:46.435347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.535 [2024-11-20 21:01:46.435356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:28.535 [2024-11-20 21:01:46.435365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.983 ms 00:19:28.535 [2024-11-20 21:01:46.435376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.535 [2024-11-20 21:01:46.437272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.535 [2024-11-20 21:01:46.437296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:28.535 [2024-11-20 21:01:46.437316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.830 ms 00:19:28.535 [2024-11-20 21:01:46.437325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.535 [2024-11-20 21:01:46.437436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.535 [2024-11-20 21:01:46.437447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:28.535 [2024-11-20 21:01:46.437455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:28.535 [2024-11-20 21:01:46.437464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.535 [2024-11-20 21:01:46.443976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.535 [2024-11-20 21:01:46.444008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:28.535 [2024-11-20 21:01:46.444021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.535 [2024-11-20 21:01:46.444030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.535 [2024-11-20 21:01:46.444120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.535 [2024-11-20 21:01:46.444133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:28.535 [2024-11-20 21:01:46.444142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.535 [2024-11-20 21:01:46.444153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.535 [2024-11-20 21:01:46.444220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.535 [2024-11-20 21:01:46.444234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:28.536 [2024-11-20 21:01:46.444242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.536 [2024-11-20 21:01:46.444251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.536 [2024-11-20 21:01:46.444287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.536 [2024-11-20 21:01:46.444298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:28.536 [2024-11-20 21:01:46.444306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.536 [2024-11-20 21:01:46.444315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.536 [2024-11-20 21:01:46.456052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:19:28.536 [2024-11-20 21:01:46.456098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:28.536 [2024-11-20 21:01:46.456109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.536 [2024-11-20 21:01:46.456119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.536 [2024-11-20 21:01:46.465937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.536 [2024-11-20 21:01:46.465975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:28.536 [2024-11-20 21:01:46.465985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.536 [2024-11-20 21:01:46.465998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.536 [2024-11-20 21:01:46.466053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.536 [2024-11-20 21:01:46.466065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.536 [2024-11-20 21:01:46.466076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.536 [2024-11-20 21:01:46.466086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.536 [2024-11-20 21:01:46.466146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.536 [2024-11-20 21:01:46.466156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.536 [2024-11-20 21:01:46.466164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.536 [2024-11-20 21:01:46.466174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.536 [2024-11-20 21:01:46.466290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.536 [2024-11-20 21:01:46.466303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.536 [2024-11-20 21:01:46.466313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.536 [2024-11-20 21:01:46.466322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.536 [2024-11-20 21:01:46.466372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.536 [2024-11-20 21:01:46.466383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:28.536 [2024-11-20 21:01:46.466391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.536 [2024-11-20 21:01:46.466402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.536 [2024-11-20 21:01:46.466449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.536 [2024-11-20 21:01:46.466459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:28.536 [2024-11-20 21:01:46.466467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.536 [2024-11-20 21:01:46.466478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.536 [2024-11-20 21:01:46.466534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:28.536 [2024-11-20 21:01:46.466547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:28.536 [2024-11-20 21:01:46.466556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:28.536 [2024-11-20 21:01:46.466566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.536 [2024-11-20 
21:01:46.466785] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.745 ms, result 0 00:19:28.536 true 00:19:28.536 21:01:46 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 87329 00:19:28.536 21:01:46 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87329 ']' 00:19:28.536 21:01:46 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87329 00:19:28.536 21:01:46 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:28.536 21:01:46 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.536 21:01:46 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87329 00:19:28.536 killing process with pid 87329 00:19:28.536 21:01:46 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.536 21:01:46 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.536 21:01:46 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87329' 00:19:28.536 21:01:46 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87329 00:19:28.536 21:01:46 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87329 00:19:33.811 21:01:51 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:34.754 65536+0 records in 00:19:34.754 65536+0 records out 00:19:34.754 268435456 bytes (268 MB, 256 MiB) copied, 1.09528 s, 245 MB/s 00:19:34.754 21:01:52 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:34.754 [2024-11-20 21:01:52.789926] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
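The shutdown trace above closes with a 100-band dump (every band 0 / 261120, wr_cnt 0, state free) and a stats block where WAF prints as inf only because no user writes have happened yet (960 total writes against 0 user writes). A one-liner of the following shape (log file name hypothetical) condenses such dumps to per-state counts when reading them by hand:

grep -o 'state: [a-z]*' ftl0_shutdown.log | sort | uniq -c
#    100 state: free     <- all bands free: nothing has been written to ftl0 yet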
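The pattern-write step just above is equally mechanical: dd produces a 256 MiB random file (4 KiB * 65536 = 268435456 bytes, matching the reported byte count) and spdk_dd replays it onto ftl0 using the bdev config assembled earlier from save_subsystem_config. A sketch of the same two commands, with a hypothetical scratch path for the pattern file:

dd if=/dev/urandom of=/tmp/random_pattern bs=4K count=65536   # 256 MiB of random data
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/tmp/random_pattern \
    --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # config saved via save_subsystem_config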
00:19:34.754 [2024-11-20 21:01:52.790072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87505 ] 00:19:35.015 [2024-11-20 21:01:52.938569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.015 [2024-11-20 21:01:52.979505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.015 [2024-11-20 21:01:53.129352] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.015 [2024-11-20 21:01:53.129434] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:35.278 [2024-11-20 21:01:53.292615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.278 [2024-11-20 21:01:53.292674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:35.278 [2024-11-20 21:01:53.292690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:35.278 [2024-11-20 21:01:53.292700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.278 [2024-11-20 21:01:53.295449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.278 [2024-11-20 21:01:53.295498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:35.278 [2024-11-20 21:01:53.295511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.727 ms 00:19:35.278 [2024-11-20 21:01:53.295526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.278 [2024-11-20 21:01:53.295639] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:35.278 [2024-11-20 21:01:53.295970] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:35.278 [2024-11-20 21:01:53.295988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.278 [2024-11-20 21:01:53.296000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:35.278 [2024-11-20 21:01:53.296012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:19:35.278 [2024-11-20 21:01:53.296021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.278 [2024-11-20 21:01:53.298543] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:35.278 [2024-11-20 21:01:53.303383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.278 [2024-11-20 21:01:53.303427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:35.278 [2024-11-20 21:01:53.303446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.843 ms 00:19:35.278 [2024-11-20 21:01:53.303455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.278 [2024-11-20 21:01:53.303551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.278 [2024-11-20 21:01:53.303563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:35.278 [2024-11-20 21:01:53.303575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:35.278 [2024-11-20 21:01:53.303588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.278 [2024-11-20 21:01:53.315252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:35.278 [2024-11-20 21:01:53.315293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:35.278 [2024-11-20 21:01:53.315305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.612 ms 00:19:35.278 [2024-11-20 21:01:53.315314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.278 [2024-11-20 21:01:53.315476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.278 [2024-11-20 21:01:53.315493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:35.278 [2024-11-20 21:01:53.315503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:35.278 [2024-11-20 21:01:53.315512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.279 [2024-11-20 21:01:53.315545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.279 [2024-11-20 21:01:53.315554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:35.279 [2024-11-20 21:01:53.315563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:35.279 [2024-11-20 21:01:53.315570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.279 [2024-11-20 21:01:53.315595] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:35.279 [2024-11-20 21:01:53.318325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.279 [2024-11-20 21:01:53.318362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:35.279 [2024-11-20 21:01:53.318374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.738 ms 00:19:35.279 [2024-11-20 21:01:53.318383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.279 [2024-11-20 21:01:53.318441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.279 [2024-11-20 21:01:53.318450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:35.279 [2024-11-20 21:01:53.318460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:35.279 [2024-11-20 21:01:53.318472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.279 [2024-11-20 21:01:53.318494] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:35.279 [2024-11-20 21:01:53.318518] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:35.279 [2024-11-20 21:01:53.318568] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:35.279 [2024-11-20 21:01:53.318593] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:35.279 [2024-11-20 21:01:53.318704] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:35.279 [2024-11-20 21:01:53.318717] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:35.279 [2024-11-20 21:01:53.318730] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:35.279 [2024-11-20 21:01:53.318742] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:35.279 [2024-11-20 21:01:53.318766] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:35.279 [2024-11-20 21:01:53.318775] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:35.279 [2024-11-20 21:01:53.318783] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:35.279 [2024-11-20 21:01:53.318791] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:35.279 [2024-11-20 21:01:53.318808] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:35.279 [2024-11-20 21:01:53.318818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.279 [2024-11-20 21:01:53.318829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:35.279 [2024-11-20 21:01:53.318840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:19:35.279 [2024-11-20 21:01:53.318851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.279 [2024-11-20 21:01:53.318941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.279 [2024-11-20 21:01:53.318952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:35.279 [2024-11-20 21:01:53.318962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:35.279 [2024-11-20 21:01:53.318970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.279 [2024-11-20 21:01:53.319073] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:35.279 [2024-11-20 21:01:53.319086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:35.279 [2024-11-20 21:01:53.319101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.279 [2024-11-20 21:01:53.319110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.279 [2024-11-20 21:01:53.319121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:35.279 [2024-11-20 21:01:53.319130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:35.279 [2024-11-20 21:01:53.319139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:35.279 [2024-11-20 21:01:53.319152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:35.279 [2024-11-20 21:01:53.319161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:35.279 [2024-11-20 21:01:53.319170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.279 [2024-11-20 21:01:53.319178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:35.279 [2024-11-20 21:01:53.319193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:35.279 [2024-11-20 21:01:53.319201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:35.279 [2024-11-20 21:01:53.319210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:35.279 [2024-11-20 21:01:53.319218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:35.279 [2024-11-20 21:01:53.319225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.279 [2024-11-20 21:01:53.319233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:35.279 [2024-11-20 21:01:53.319241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:35.279 [2024-11-20 21:01:53.319250] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.279 [2024-11-20 21:01:53.319258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:35.279 [2024-11-20 21:01:53.319266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:35.279 [2024-11-20 21:01:53.319274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.279 [2024-11-20 21:01:53.319282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:35.279 [2024-11-20 21:01:53.319295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:35.279 [2024-11-20 21:01:53.319303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.279 [2024-11-20 21:01:53.319310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:35.279 [2024-11-20 21:01:53.319317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:35.279 [2024-11-20 21:01:53.319324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.279 [2024-11-20 21:01:53.319332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:35.279 [2024-11-20 21:01:53.319339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:35.279 [2024-11-20 21:01:53.319346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:35.279 [2024-11-20 21:01:53.319353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:35.279 [2024-11-20 21:01:53.319361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:35.279 [2024-11-20 21:01:53.319367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.279 [2024-11-20 21:01:53.319374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:35.279 [2024-11-20 21:01:53.319381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:35.279 [2024-11-20 21:01:53.319388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:35.279 [2024-11-20 21:01:53.319394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:35.279 [2024-11-20 21:01:53.319401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:35.279 [2024-11-20 21:01:53.319410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.279 [2024-11-20 21:01:53.319418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:35.279 [2024-11-20 21:01:53.319424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:35.279 [2024-11-20 21:01:53.319433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.279 [2024-11-20 21:01:53.319441] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:35.279 [2024-11-20 21:01:53.319453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:35.279 [2024-11-20 21:01:53.319461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:35.279 [2024-11-20 21:01:53.319469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:35.280 [2024-11-20 21:01:53.319477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:35.280 [2024-11-20 21:01:53.319485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:35.280 [2024-11-20 21:01:53.319493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:35.280 
[2024-11-20 21:01:53.319500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:35.280 [2024-11-20 21:01:53.319506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:35.280 [2024-11-20 21:01:53.319513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:35.280 [2024-11-20 21:01:53.319522] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:35.280 [2024-11-20 21:01:53.319532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.280 [2024-11-20 21:01:53.319543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:35.280 [2024-11-20 21:01:53.319551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:35.280 [2024-11-20 21:01:53.319559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:35.280 [2024-11-20 21:01:53.319567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:35.280 [2024-11-20 21:01:53.319574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:35.280 [2024-11-20 21:01:53.319581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:35.280 [2024-11-20 21:01:53.319588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:35.280 [2024-11-20 21:01:53.319595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:35.280 [2024-11-20 21:01:53.319603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:35.280 [2024-11-20 21:01:53.319616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:35.280 [2024-11-20 21:01:53.319627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:35.280 [2024-11-20 21:01:53.319636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:35.280 [2024-11-20 21:01:53.319643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:35.280 [2024-11-20 21:01:53.319651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:35.280 [2024-11-20 21:01:53.319658] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:35.280 [2024-11-20 21:01:53.319667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:35.280 [2024-11-20 21:01:53.319681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:35.280 [2024-11-20 21:01:53.319689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:35.280 [2024-11-20 21:01:53.319698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:35.280 [2024-11-20 21:01:53.319706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:35.280 [2024-11-20 21:01:53.319715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.280 [2024-11-20 21:01:53.319724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:35.280 [2024-11-20 21:01:53.319736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.711 ms 00:19:35.280 [2024-11-20 21:01:53.319744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.280 [2024-11-20 21:01:53.340072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.280 [2024-11-20 21:01:53.340114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:35.280 [2024-11-20 21:01:53.340126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.242 ms 00:19:35.280 [2024-11-20 21:01:53.340135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.280 [2024-11-20 21:01:53.340276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.280 [2024-11-20 21:01:53.340288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:35.280 [2024-11-20 21:01:53.340310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:35.280 [2024-11-20 21:01:53.340318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.280 [2024-11-20 21:01:53.366961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.280 [2024-11-20 21:01:53.367018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.280 [2024-11-20 21:01:53.367040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.613 ms 00:19:35.280 [2024-11-20 21:01:53.367052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.280 [2024-11-20 21:01:53.367173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.280 [2024-11-20 21:01:53.367193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.280 [2024-11-20 21:01:53.367205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:35.280 [2024-11-20 21:01:53.367219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.280 [2024-11-20 21:01:53.367978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.280 [2024-11-20 21:01:53.368007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.280 [2024-11-20 21:01:53.368020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:19:35.280 [2024-11-20 21:01:53.368036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.280 [2024-11-20 21:01:53.368236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.280 [2024-11-20 21:01:53.368248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.280 [2024-11-20 21:01:53.368263] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:19:35.280 [2024-11-20 21:01:53.368273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.280 [2024-11-20 21:01:53.380473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.280 [2024-11-20 21:01:53.380523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.280 [2024-11-20 21:01:53.380538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.171 ms 00:19:35.280 [2024-11-20 21:01:53.380551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.280 [2024-11-20 21:01:53.385290] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:35.280 [2024-11-20 21:01:53.385347] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:35.280 [2024-11-20 21:01:53.385360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.280 [2024-11-20 21:01:53.385370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:35.280 [2024-11-20 21:01:53.385380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.675 ms 00:19:35.280 [2024-11-20 21:01:53.385388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.541 [2024-11-20 21:01:53.402150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.541 [2024-11-20 21:01:53.402196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:35.541 [2024-11-20 21:01:53.402218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.640 ms 00:19:35.541 [2024-11-20 21:01:53.402228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.541 [2024-11-20 21:01:53.405360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.541 [2024-11-20 21:01:53.405404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:35.541 [2024-11-20 21:01:53.405415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.001 ms 00:19:35.541 [2024-11-20 21:01:53.405422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.541 [2024-11-20 21:01:53.408195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.541 [2024-11-20 21:01:53.408238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:35.541 [2024-11-20 21:01:53.408247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.704 ms 00:19:35.541 [2024-11-20 21:01:53.408255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.541 [2024-11-20 21:01:53.408629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.541 [2024-11-20 21:01:53.408643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:35.541 [2024-11-20 21:01:53.408656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:19:35.541 [2024-11-20 21:01:53.408664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.541 [2024-11-20 21:01:53.440608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.541 [2024-11-20 21:01:53.440667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:35.541 [2024-11-20 21:01:53.440681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
31.919 ms 00:19:35.541 [2024-11-20 21:01:53.440691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.541 [2024-11-20 21:01:53.450499] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:35.541 [2024-11-20 21:01:53.476121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.541 [2024-11-20 21:01:53.476165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:35.541 [2024-11-20 21:01:53.476180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.306 ms 00:19:35.541 [2024-11-20 21:01:53.476190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.541 [2024-11-20 21:01:53.476318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.541 [2024-11-20 21:01:53.476332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:35.541 [2024-11-20 21:01:53.476343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:35.541 [2024-11-20 21:01:53.476353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.541 [2024-11-20 21:01:53.476429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.541 [2024-11-20 21:01:53.476439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:35.541 [2024-11-20 21:01:53.476454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:35.541 [2024-11-20 21:01:53.476464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.541 [2024-11-20 21:01:53.476494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.541 [2024-11-20 21:01:53.476505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:35.541 [2024-11-20 21:01:53.476517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:35.541 [2024-11-20 21:01:53.476531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.541 [2024-11-20 21:01:53.476571] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:35.541 [2024-11-20 21:01:53.476585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.541 [2024-11-20 21:01:53.476593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:35.541 [2024-11-20 21:01:53.476601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:35.541 [2024-11-20 21:01:53.476610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.541 [2024-11-20 21:01:53.483342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.541 [2024-11-20 21:01:53.483389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:35.541 [2024-11-20 21:01:53.483401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.702 ms 00:19:35.541 [2024-11-20 21:01:53.483421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.541 [2024-11-20 21:01:53.483530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.541 [2024-11-20 21:01:53.483547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:35.541 [2024-11-20 21:01:53.483557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:35.541 [2024-11-20 21:01:53.483566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.541 
[2024-11-20 21:01:53.485139] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:35.541 [2024-11-20 21:01:53.486639] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 192.110 ms, result 0 00:19:35.541 [2024-11-20 21:01:53.488352] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:35.541 [2024-11-20 21:01:53.495387] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.488  [2024-11-20T21:01:55.549Z] Copying: 15/256 [MB] (15 MBps) [2024-11-20T21:01:56.931Z] Copying: 35/256 [MB] (19 MBps) [2024-11-20T21:01:57.499Z] Copying: 77/256 [MB] (41 MBps) [2024-11-20T21:01:58.938Z] Copying: 112/256 [MB] (35 MBps) [2024-11-20T21:01:59.510Z] Copying: 144/256 [MB] (31 MBps) [2024-11-20T21:02:00.897Z] Copying: 182/256 [MB] (38 MBps) [2024-11-20T21:02:01.842Z] Copying: 215/256 [MB] (33 MBps) [2024-11-20T21:02:01.842Z] Copying: 256/256 [MB] (average 32 MBps)[2024-11-20 21:02:01.493687] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:43.723 [2024-11-20 21:02:01.494830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.723 [2024-11-20 21:02:01.494858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:43.723 [2024-11-20 21:02:01.494876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:43.723 [2024-11-20 21:02:01.494884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.723 [2024-11-20 21:02:01.494905] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:43.723 [2024-11-20 21:02:01.495328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.723 [2024-11-20 21:02:01.495342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:43.723 [2024-11-20 21:02:01.495351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:19:43.723 [2024-11-20 21:02:01.495358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.723 [2024-11-20 21:02:01.496993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.723 [2024-11-20 21:02:01.497022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:43.723 [2024-11-20 21:02:01.497032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.615 ms 00:19:43.723 [2024-11-20 21:02:01.497040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.723 [2024-11-20 21:02:01.503258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.723 [2024-11-20 21:02:01.503287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:43.723 [2024-11-20 21:02:01.503297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.193 ms 00:19:43.723 [2024-11-20 21:02:01.503305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.723 [2024-11-20 21:02:01.510262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.723 [2024-11-20 21:02:01.510305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:43.723 [2024-11-20 21:02:01.510316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.926 ms 00:19:43.723 [2024-11-20 
21:02:01.510323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.723 [2024-11-20 21:02:01.511782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.723 [2024-11-20 21:02:01.511811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:43.724 [2024-11-20 21:02:01.511820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:19:43.724 [2024-11-20 21:02:01.511827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.724 [2024-11-20 21:02:01.515414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.724 [2024-11-20 21:02:01.515448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:43.724 [2024-11-20 21:02:01.515463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.532 ms 00:19:43.724 [2024-11-20 21:02:01.515471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.724 [2024-11-20 21:02:01.515618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.724 [2024-11-20 21:02:01.515628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:43.724 [2024-11-20 21:02:01.515637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:43.724 [2024-11-20 21:02:01.515644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.724 [2024-11-20 21:02:01.517688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.724 [2024-11-20 21:02:01.517717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:43.724 [2024-11-20 21:02:01.517726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.023 ms 00:19:43.724 [2024-11-20 21:02:01.517733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.724 [2024-11-20 21:02:01.519081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.724 [2024-11-20 21:02:01.519120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:43.724 [2024-11-20 21:02:01.519129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:19:43.724 [2024-11-20 21:02:01.519136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.724 [2024-11-20 21:02:01.520216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.724 [2024-11-20 21:02:01.520239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:43.724 [2024-11-20 21:02:01.520248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms 00:19:43.724 [2024-11-20 21:02:01.520256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.724 [2024-11-20 21:02:01.521529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.724 [2024-11-20 21:02:01.521556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:43.724 [2024-11-20 21:02:01.521564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.214 ms 00:19:43.724 [2024-11-20 21:02:01.521572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.724 [2024-11-20 21:02:01.521601] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:43.724 [2024-11-20 21:02:01.521615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521625] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 
21:02:01.521827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.521996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
00:19:43.724 [2024-11-20 21:02:01.522025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:43.724 [2024-11-20 21:02:01.522134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:43.725 [2024-11-20 21:02:01.522413] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:19:43.725 [2024-11-20 21:02:01.522421] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 901b289f-d2d0-45a3-b4d3-7f72fc1e8a6a 00:19:43.725 [2024-11-20 21:02:01.522429] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:43.725 [2024-11-20 21:02:01.522436] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:43.725 [2024-11-20 21:02:01.522443] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:43.725 [2024-11-20 21:02:01.522452] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:43.725 [2024-11-20 21:02:01.522459] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:43.725 [2024-11-20 21:02:01.522467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:43.725 [2024-11-20 21:02:01.522474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:43.725 [2024-11-20 21:02:01.522481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:43.725 [2024-11-20 21:02:01.522487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:43.725 [2024-11-20 21:02:01.522494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.725 [2024-11-20 21:02:01.522501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:43.725 [2024-11-20 21:02:01.522513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:19:43.725 [2024-11-20 21:02:01.522520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.725 [2024-11-20 21:02:01.523998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.725 [2024-11-20 21:02:01.524015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:43.725 [2024-11-20 21:02:01.524025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms 00:19:43.725 [2024-11-20 21:02:01.524033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.725 [2024-11-20 21:02:01.524112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:43.725 [2024-11-20 21:02:01.524120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:43.725 [2024-11-20 21:02:01.524129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:43.725 [2024-11-20 21:02:01.524136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.725 [2024-11-20 21:02:01.529191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.725 [2024-11-20 21:02:01.529225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:43.725 [2024-11-20 21:02:01.529235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.725 [2024-11-20 21:02:01.529243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.725 [2024-11-20 21:02:01.529310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.725 [2024-11-20 21:02:01.529318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:43.725 [2024-11-20 21:02:01.529326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.725 [2024-11-20 21:02:01.529333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.725 [2024-11-20 21:02:01.529377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.725 
[2024-11-20 21:02:01.529386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:43.725 [2024-11-20 21:02:01.529394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.725 [2024-11-20 21:02:01.529401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.725 [2024-11-20 21:02:01.529419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.725 [2024-11-20 21:02:01.529426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:43.725 [2024-11-20 21:02:01.529436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.725 [2024-11-20 21:02:01.529448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.725 [2024-11-20 21:02:01.538256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.725 [2024-11-20 21:02:01.538299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:43.725 [2024-11-20 21:02:01.538310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.725 [2024-11-20 21:02:01.538317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.725 [2024-11-20 21:02:01.545220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.725 [2024-11-20 21:02:01.545264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:43.725 [2024-11-20 21:02:01.545275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.725 [2024-11-20 21:02:01.545282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.725 [2024-11-20 21:02:01.545320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.725 [2024-11-20 21:02:01.545328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:43.725 [2024-11-20 21:02:01.545336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.725 [2024-11-20 21:02:01.545347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.725 [2024-11-20 21:02:01.545377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.725 [2024-11-20 21:02:01.545386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:43.725 [2024-11-20 21:02:01.545393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.725 [2024-11-20 21:02:01.545403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.725 [2024-11-20 21:02:01.545462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.725 [2024-11-20 21:02:01.545471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:43.725 [2024-11-20 21:02:01.545479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.725 [2024-11-20 21:02:01.545487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.725 [2024-11-20 21:02:01.545518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.725 [2024-11-20 21:02:01.545533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:43.725 [2024-11-20 21:02:01.545541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.725 [2024-11-20 21:02:01.545551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.725 [2024-11-20 21:02:01.545592] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.725 [2024-11-20 21:02:01.545600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:43.725 [2024-11-20 21:02:01.545608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.725 [2024-11-20 21:02:01.545615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.725 [2024-11-20 21:02:01.545660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:43.725 [2024-11-20 21:02:01.545670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:43.725 [2024-11-20 21:02:01.545678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:43.725 [2024-11-20 21:02:01.545687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:43.726 [2024-11-20 21:02:01.545840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.985 ms, result 0 00:19:43.987 00:19:43.987 00:19:43.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.987 21:02:01 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=87602 00:19:43.987 21:02:01 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 87602 00:19:43.987 21:02:01 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:43.987 21:02:01 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87602 ']' 00:19:43.987 21:02:01 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.987 21:02:01 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:43.987 21:02:01 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.987 21:02:01 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:43.987 21:02:01 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:43.987 [2024-11-20 21:02:02.029504] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
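The 'FTL startup' and 'FTL shutdown' summaries above pair each trace_step "name:" NOTICE with a "duration:" NOTICE and close with a finish_msg total (192.110 ms and 50.985 ms here). A minimal sketch of a summarizer for such a capture, assuming exactly the NOTICE wording visible in this log (a hypothetical helper, not an SPDK tool):

#!/usr/bin/env python3
"""Hypothetical helper (not part of SPDK): pair trace_step name/duration
NOTICE entries from a capture like the log above and report the slowest
steps per FTL management process."""
import re
import sys

# The three patterns assume this log's exact NOTICE formats; a step name
# is terminated by the elapsed timestamp (HH:MM:SS.mmm) of the next entry.
TOKEN = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (?P<name>.+?)(?=\s\d{2}:\d{2}:\d{2}\.\d{3})"
    r"|trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: (?P<ms>[\d.]+) ms"
    r"|Management process finished, name '(?P<proc>[^']+)', duration = (?P<total>[\d.]+) ms"
)

def summarize(text: str) -> None:
    steps, pending = [], None
    for m in TOKEN.finditer(text):
        if m.group("name"):                      # step announced (428:trace_step)
            pending = m.group("name").strip()
        elif m.group("ms") and pending:          # its duration (430:trace_step)
            steps.append((float(m.group("ms")), pending))
            pending = None
        elif m.group("proc"):                    # process total (459:finish_msg)
            parsed = sum(ms for ms, _ in steps)
            print(f"{m.group('proc')}: reported {m.group('total')} ms, "
                  f"parsed steps sum to {parsed:.3f} ms")
            for ms, name in sorted(steps, reverse=True)[:5]:
                print(f"  {ms:9.3f} ms  {name}")
            steps.clear()

if __name__ == "__main__":
    summarize(sys.stdin.read())

Fed this log on stdin it would, under those assumptions, flag Initialize L2P (35.306 ms), Restore P2L checkpoints (31.919 ms) and Initialize NV cache (26.613 ms) as the dominant startup steps.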
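The 'SB metadata layout' dumps give region extents in FTL blocks (blk_offs/blk_sz, in hex), while the 'NV cache layout' dump gives the same regions in MiB. The block size itself is not printed, but cross-checking the two recovers it: the l2p region (type 0x2) spans 0x5a00 = 23040 blocks and is reported as 90.00 MiB, so one block is 90 * 1024 * 1024 / 23040 = 4096 bytes. A small hypothetical converter under that 4 KiB assumption:

# Hypothetical converter, assuming the 4 KiB FTL block size inferred above.
FTL_BLOCK = 4096  # bytes; 0x5a00 blocks == 90.00 MiB in this log

def region_mib(blk_offs_hex: str, blk_sz_hex: str):
    """Map a 'Region type:... blk_offs:... blk_sz:...' entry to (offset, size) in MiB."""
    to_mib = lambda h: int(h, 16) * FTL_BLOCK / (1024 * 1024)
    return to_mib(blk_offs_hex), to_mib(blk_sz_hex)

print(region_mib("0x20", "0x5a00"))  # (0.125, 90.0) -- matches 'offset: 0.12 MiB', 'blocks: 90.00 MiB'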
00:19:43.987 [2024-11-20 21:02:02.029619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87602 ] 00:19:44.248 [2024-11-20 21:02:02.175822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.248 [2024-11-20 21:02:02.193781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.821 21:02:02 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.821 21:02:02 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:44.822 21:02:02 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:45.084 [2024-11-20 21:02:03.060587] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:45.084 [2024-11-20 21:02:03.060647] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:45.347 [2024-11-20 21:02:03.214577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.347 [2024-11-20 21:02:03.214623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:45.347 [2024-11-20 21:02:03.214635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:45.347 [2024-11-20 21:02:03.214645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.347 [2024-11-20 21:02:03.216891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.347 [2024-11-20 21:02:03.216929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:45.347 [2024-11-20 21:02:03.216938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms 00:19:45.347 [2024-11-20 21:02:03.216947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.347 [2024-11-20 21:02:03.217148] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:45.347 [2024-11-20 21:02:03.217379] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:45.347 [2024-11-20 21:02:03.217404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.347 [2024-11-20 21:02:03.217418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:45.347 [2024-11-20 21:02:03.217426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:19:45.347 [2024-11-20 21:02:03.217436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.347 [2024-11-20 21:02:03.218606] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:45.347 [2024-11-20 21:02:03.220707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.347 [2024-11-20 21:02:03.220758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:45.347 [2024-11-20 21:02:03.220770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.100 ms 00:19:45.347 [2024-11-20 21:02:03.220777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.347 [2024-11-20 21:02:03.220830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.347 [2024-11-20 21:02:03.220839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:45.347 [2024-11-20 21:02:03.220851] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:45.347 [2024-11-20 21:02:03.220859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.347 [2024-11-20 21:02:03.225384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.347 [2024-11-20 21:02:03.225414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:45.347 [2024-11-20 21:02:03.225425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.474 ms 00:19:45.347 [2024-11-20 21:02:03.225432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.347 [2024-11-20 21:02:03.225524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.347 [2024-11-20 21:02:03.225535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:45.347 [2024-11-20 21:02:03.225544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:45.347 [2024-11-20 21:02:03.225551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.347 [2024-11-20 21:02:03.225580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.347 [2024-11-20 21:02:03.225589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:45.347 [2024-11-20 21:02:03.225598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:45.347 [2024-11-20 21:02:03.225606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.347 [2024-11-20 21:02:03.225629] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:45.347 [2024-11-20 21:02:03.226928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.347 [2024-11-20 21:02:03.226960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:45.347 [2024-11-20 21:02:03.226973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:19:45.347 [2024-11-20 21:02:03.226986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.347 [2024-11-20 21:02:03.227022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.347 [2024-11-20 21:02:03.227031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:45.347 [2024-11-20 21:02:03.227039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:45.347 [2024-11-20 21:02:03.227047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.347 [2024-11-20 21:02:03.227066] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:45.347 [2024-11-20 21:02:03.227083] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:45.347 [2024-11-20 21:02:03.227117] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:45.347 [2024-11-20 21:02:03.227140] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:45.347 [2024-11-20 21:02:03.227240] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:45.347 [2024-11-20 21:02:03.227259] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:45.347 [2024-11-20 21:02:03.227272] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:45.347 [2024-11-20 21:02:03.227284] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:45.347 [2024-11-20 21:02:03.227292] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:45.347 [2024-11-20 21:02:03.227303] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:45.347 [2024-11-20 21:02:03.227310] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:45.347 [2024-11-20 21:02:03.227319] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:45.347 [2024-11-20 21:02:03.227325] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:45.347 [2024-11-20 21:02:03.227337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.347 [2024-11-20 21:02:03.227344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:45.347 [2024-11-20 21:02:03.227353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:19:45.347 [2024-11-20 21:02:03.227359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.347 [2024-11-20 21:02:03.227460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.347 [2024-11-20 21:02:03.227474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:45.347 [2024-11-20 21:02:03.227486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:45.347 [2024-11-20 21:02:03.227493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.347 [2024-11-20 21:02:03.227593] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:45.347 [2024-11-20 21:02:03.227608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:45.347 [2024-11-20 21:02:03.227619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:45.347 [2024-11-20 21:02:03.227627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.347 [2024-11-20 21:02:03.227641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:45.347 [2024-11-20 21:02:03.227649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:45.347 [2024-11-20 21:02:03.227658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:45.347 [2024-11-20 21:02:03.227667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:45.347 [2024-11-20 21:02:03.227677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:45.347 [2024-11-20 21:02:03.227684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:45.347 [2024-11-20 21:02:03.227693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:45.347 [2024-11-20 21:02:03.227701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:45.347 [2024-11-20 21:02:03.227710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:45.347 [2024-11-20 21:02:03.227718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:45.347 [2024-11-20 21:02:03.227728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:45.347 [2024-11-20 21:02:03.227735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.347 
[2024-11-20 21:02:03.227754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:45.347 [2024-11-20 21:02:03.227763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:45.347 [2024-11-20 21:02:03.227771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.347 [2024-11-20 21:02:03.227779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:45.347 [2024-11-20 21:02:03.227791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:45.347 [2024-11-20 21:02:03.227799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.347 [2024-11-20 21:02:03.227807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:45.347 [2024-11-20 21:02:03.227815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:45.347 [2024-11-20 21:02:03.227824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.348 [2024-11-20 21:02:03.227831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:45.348 [2024-11-20 21:02:03.227840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:45.348 [2024-11-20 21:02:03.227848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.348 [2024-11-20 21:02:03.227857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:45.348 [2024-11-20 21:02:03.227864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:45.348 [2024-11-20 21:02:03.227875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:45.348 [2024-11-20 21:02:03.227882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:45.348 [2024-11-20 21:02:03.227892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:45.348 [2024-11-20 21:02:03.227899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:45.348 [2024-11-20 21:02:03.227908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:45.348 [2024-11-20 21:02:03.227915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:45.348 [2024-11-20 21:02:03.227926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:45.348 [2024-11-20 21:02:03.227934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:45.348 [2024-11-20 21:02:03.227943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:45.348 [2024-11-20 21:02:03.227951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.348 [2024-11-20 21:02:03.227961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:45.348 [2024-11-20 21:02:03.227967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:45.348 [2024-11-20 21:02:03.227975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.348 [2024-11-20 21:02:03.227981] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:45.348 [2024-11-20 21:02:03.227990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:45.348 [2024-11-20 21:02:03.227997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:45.348 [2024-11-20 21:02:03.228009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:45.348 [2024-11-20 21:02:03.228016] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:45.348 [2024-11-20 21:02:03.228024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:45.348 [2024-11-20 21:02:03.228030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:45.348 [2024-11-20 21:02:03.228039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:45.348 [2024-11-20 21:02:03.228045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:45.348 [2024-11-20 21:02:03.228054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:45.348 [2024-11-20 21:02:03.228062] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:45.348 [2024-11-20 21:02:03.228074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:45.348 [2024-11-20 21:02:03.228083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:45.348 [2024-11-20 21:02:03.228092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:45.348 [2024-11-20 21:02:03.228099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:45.348 [2024-11-20 21:02:03.228107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:45.348 [2024-11-20 21:02:03.228114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:45.348 [2024-11-20 21:02:03.228123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:45.348 [2024-11-20 21:02:03.228130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:45.348 [2024-11-20 21:02:03.228138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:45.348 [2024-11-20 21:02:03.228144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:45.348 [2024-11-20 21:02:03.228153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:45.348 [2024-11-20 21:02:03.228160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:45.348 [2024-11-20 21:02:03.228168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:45.348 [2024-11-20 21:02:03.228175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:45.348 [2024-11-20 21:02:03.228189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:45.348 [2024-11-20 21:02:03.228196] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:45.348 [2024-11-20 
21:02:03.228207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:45.348 [2024-11-20 21:02:03.228219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:45.348 [2024-11-20 21:02:03.228228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:45.348 [2024-11-20 21:02:03.228235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:45.348 [2024-11-20 21:02:03.228244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:45.348 [2024-11-20 21:02:03.228251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.348 [2024-11-20 21:02:03.228260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:45.348 [2024-11-20 21:02:03.228268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:19:45.348 [2024-11-20 21:02:03.228277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.348 [2024-11-20 21:02:03.236519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.348 [2024-11-20 21:02:03.236554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:45.348 [2024-11-20 21:02:03.236563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.187 ms 00:19:45.348 [2024-11-20 21:02:03.236576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.348 [2024-11-20 21:02:03.236674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.348 [2024-11-20 21:02:03.236698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:45.348 [2024-11-20 21:02:03.236706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:45.348 [2024-11-20 21:02:03.236714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.348 [2024-11-20 21:02:03.244910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.348 [2024-11-20 21:02:03.244946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:45.348 [2024-11-20 21:02:03.244958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.176 ms 00:19:45.348 [2024-11-20 21:02:03.244967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.348 [2024-11-20 21:02:03.245021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.348 [2024-11-20 21:02:03.245032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:45.348 [2024-11-20 21:02:03.245040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:45.348 [2024-11-20 21:02:03.245049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.348 [2024-11-20 21:02:03.245354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.348 [2024-11-20 21:02:03.245379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:45.348 [2024-11-20 21:02:03.245388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:19:45.348 [2024-11-20 21:02:03.245397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:45.348 [2024-11-20 21:02:03.245517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.348 [2024-11-20 21:02:03.245538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:45.348 [2024-11-20 21:02:03.245547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:19:45.348 [2024-11-20 21:02:03.245557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.348 [2024-11-20 21:02:03.250787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.348 [2024-11-20 21:02:03.250822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:45.348 [2024-11-20 21:02:03.250831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.209 ms 00:19:45.348 [2024-11-20 21:02:03.250840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.348 [2024-11-20 21:02:03.253002] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:45.348 [2024-11-20 21:02:03.253038] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:45.348 [2024-11-20 21:02:03.253048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.348 [2024-11-20 21:02:03.253058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:45.348 [2024-11-20 21:02:03.253066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.122 ms 00:19:45.348 [2024-11-20 21:02:03.253075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.348 [2024-11-20 21:02:03.267339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.348 [2024-11-20 21:02:03.267375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:45.348 [2024-11-20 21:02:03.267385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.223 ms 00:19:45.348 [2024-11-20 21:02:03.267396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.348 [2024-11-20 21:02:03.269114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.348 [2024-11-20 21:02:03.269149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:45.348 [2024-11-20 21:02:03.269157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:19:45.348 [2024-11-20 21:02:03.269166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.348 [2024-11-20 21:02:03.270484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.348 [2024-11-20 21:02:03.270519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:45.348 [2024-11-20 21:02:03.270527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:19:45.349 [2024-11-20 21:02:03.270535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.349 [2024-11-20 21:02:03.270862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.349 [2024-11-20 21:02:03.270881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:45.349 [2024-11-20 21:02:03.270889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:19:45.349 [2024-11-20 21:02:03.270898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.349 [2024-11-20 21:02:03.295690] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.349 [2024-11-20 21:02:03.295766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:45.349 [2024-11-20 21:02:03.295788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.769 ms 00:19:45.349 [2024-11-20 21:02:03.295804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.349 [2024-11-20 21:02:03.303727] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:45.349 [2024-11-20 21:02:03.316770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.349 [2024-11-20 21:02:03.316808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:45.349 [2024-11-20 21:02:03.316821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.878 ms 00:19:45.349 [2024-11-20 21:02:03.316828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.349 [2024-11-20 21:02:03.316911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.349 [2024-11-20 21:02:03.316921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:45.349 [2024-11-20 21:02:03.316934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:45.349 [2024-11-20 21:02:03.316941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.349 [2024-11-20 21:02:03.316989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.349 [2024-11-20 21:02:03.317000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:45.349 [2024-11-20 21:02:03.317009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:45.349 [2024-11-20 21:02:03.317016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.349 [2024-11-20 21:02:03.317042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.349 [2024-11-20 21:02:03.317050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:45.349 [2024-11-20 21:02:03.317061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:45.349 [2024-11-20 21:02:03.317070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.349 [2024-11-20 21:02:03.317101] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:45.349 [2024-11-20 21:02:03.317110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.349 [2024-11-20 21:02:03.317123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:45.349 [2024-11-20 21:02:03.317132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:45.349 [2024-11-20 21:02:03.317140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.349 [2024-11-20 21:02:03.320524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.349 [2024-11-20 21:02:03.320562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:45.349 [2024-11-20 21:02:03.320572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.363 ms 00:19:45.349 [2024-11-20 21:02:03.320583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.349 [2024-11-20 21:02:03.320653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.349 [2024-11-20 21:02:03.320664] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:19:45.349 [2024-11-20 21:02:03.320673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:19:45.349 [2024-11-20 21:02:03.320682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.349 [2024-11-20 21:02:03.321476] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:45.349 [2024-11-20 21:02:03.322469] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 106.639 ms, result 0
00:19:45.349 [2024-11-20 21:02:03.323485] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:45.349 Some configs were skipped because the RPC state that can call them passed over.
00:19:45.349 21:02:03 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:19:45.610 [2024-11-20 21:02:03.541655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.610 [2024-11-20 21:02:03.541702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:19:45.610 [2024-11-20 21:02:03.541717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.512 ms
00:19:45.610 [2024-11-20 21:02:03.541725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.610 [2024-11-20 21:02:03.541769] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.621 ms, result 0
00:19:45.610 true
00:19:45.610 21:02:03 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:19:45.873 [2024-11-20 21:02:03.730506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.873 [2024-11-20 21:02:03.730548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:19:45.873 [2024-11-20 21:02:03.730558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.124 ms
00:19:45.873 [2024-11-20 21:02:03.730567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.873 [2024-11-20 21:02:03.730600] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.217 ms, result 0
00:19:45.873 true
00:19:45.873 21:02:03 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 87602
00:19:45.873 21:02:03 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87602 ']'
00:19:45.873 21:02:03 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87602
00:19:45.873 21:02:03 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:19:45.873 21:02:03 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:45.873 21:02:03 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87602
00:19:45.873 killing process with pid 87602
21:02:03 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:45.873 21:02:03 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:45.873 21:02:03 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87602'
00:19:45.873 21:02:03 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87602
00:19:45.873 21:02:03 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87602
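
The two bdev_ftl_unmap RPCs above trim 1024 blocks at each end of the device: with 23592960 L2P entries, --lba 23591936 is 23592960 - 1024, i.e. the last 1024 blocks. A minimal standalone sketch of the same step, assuming a running SPDK target on rpc.py's default /var/tmp/spdk.sock socket (bdev name, LBAs and script path are taken from the log above):

  # Sketch, not captured output: trim the first and last 1024 blocks of ftl0.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  $RPC bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call is traced as a 'Process trim' action and an 'FTL trim' management process ending with result 0 before rpc.py prints true; killprocess (traced above) then stops pid 87602.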
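The killprocess lines are bash xtrace of the autotest_common.sh helper; a simplified reconstruction from the traced commands (the @954-@978 markers above are its source lines; the sudo branch checked at @964 is omitted here):

  # Sketch of the traced helper, simplified; not the verbatim function.
  killprocess() {
    [ -z "$1" ] && return 1                          # @954: a pid is required
    kill -0 "$1" || return 1                         # @958: fails if already gone
    if [ "$(uname)" = Linux ]; then                  # @959
      process_name=$(ps --no-headers -o comm= "$1")  # @960: reactor_0 here
    fi
    echo "killing process with pid $1"               # @972
    kill "$1"                                        # @973
    wait "$1"                                        # @978: reap and surface exit code
  }

Killing the app is what triggers the 'FTL shutdown' management sequence traced next, which persists the L2P, NV cache, valid map, P2L and band metadata before the process exits.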
00:19:45.873 [2024-11-20 21:02:03.872057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.873 [2024-11-20 21:02:03.872103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:45.873 [2024-11-20 21:02:03.872117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:19:45.873 [2024-11-20 21:02:03.872126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.873 [2024-11-20 21:02:03.872151] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:19:45.873 [2024-11-20 21:02:03.872564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.873 [2024-11-20 21:02:03.872595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:45.873 [2024-11-20 21:02:03.872607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms
00:19:45.873 [2024-11-20 21:02:03.872617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.873 [2024-11-20 21:02:03.872947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.873 [2024-11-20 21:02:03.872967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:45.873 [2024-11-20 21:02:03.872977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms
00:19:45.873 [2024-11-20 21:02:03.872988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.873 [2024-11-20 21:02:03.877463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.873 [2024-11-20 21:02:03.877496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:45.873 [2024-11-20 21:02:03.877505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.456 ms
00:19:45.873 [2024-11-20 21:02:03.877513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.873 [2024-11-20 21:02:03.884397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.873 [2024-11-20 21:02:03.884432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:19:45.873 [2024-11-20 21:02:03.884441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.850 ms
00:19:45.873 [2024-11-20 21:02:03.884452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.874 [2024-11-20 21:02:03.886480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.874 [2024-11-20 21:02:03.886516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:19:45.874 [2024-11-20 21:02:03.886525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.963 ms
00:19:45.874 [2024-11-20 21:02:03.886534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.874 [2024-11-20 21:02:03.890267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.874 [2024-11-20 21:02:03.890311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:19:45.874 [2024-11-20 21:02:03.890320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.701 ms
00:19:45.874 [2024-11-20 21:02:03.890331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.874 [2024-11-20 21:02:03.890453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.874 [2024-11-20 21:02:03.890464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:19:45.874 [2024-11-20 21:02:03.890477] mngt/ftl_mngt.c: 430:trace_step:
*NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:45.874 [2024-11-20 21:02:03.890485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.874 [2024-11-20 21:02:03.892913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.874 [2024-11-20 21:02:03.892961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:45.874 [2024-11-20 21:02:03.892972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.408 ms 00:19:45.874 [2024-11-20 21:02:03.892984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.874 [2024-11-20 21:02:03.895057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.874 [2024-11-20 21:02:03.895096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:45.874 [2024-11-20 21:02:03.895105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.035 ms 00:19:45.874 [2024-11-20 21:02:03.895113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.874 [2024-11-20 21:02:03.896876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.874 [2024-11-20 21:02:03.896910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:45.874 [2024-11-20 21:02:03.896918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:19:45.874 [2024-11-20 21:02:03.896926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.874 [2024-11-20 21:02:03.898682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.874 [2024-11-20 21:02:03.898717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:45.874 [2024-11-20 21:02:03.898726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.698 ms 00:19:45.874 [2024-11-20 21:02:03.898734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.874 [2024-11-20 21:02:03.898773] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:45.874 [2024-11-20 21:02:03.898789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898876] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.898997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 
21:02:03.899087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:19:45.874 [2024-11-20 21:02:03.899290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:45.874 [2024-11-20 21:02:03.899313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:45.875 [2024-11-20 21:02:03.899631] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:45.875 [2024-11-20 21:02:03.899639] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 901b289f-d2d0-45a3-b4d3-7f72fc1e8a6a 00:19:45.875 [2024-11-20 21:02:03.899648] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:45.875 [2024-11-20 21:02:03.899658] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:45.875 [2024-11-20 21:02:03.899666] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:45.875 [2024-11-20 21:02:03.899673] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:45.875 [2024-11-20 21:02:03.899683] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:45.875 [2024-11-20 21:02:03.899690] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:45.875 [2024-11-20 21:02:03.899701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:45.875 [2024-11-20 21:02:03.899709] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:45.875 [2024-11-20 21:02:03.899716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:45.875 [2024-11-20 21:02:03.899724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.875 
[2024-11-20 21:02:03.899733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:45.875 [2024-11-20 21:02:03.899742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.952 ms 00:19:45.875 [2024-11-20 21:02:03.899764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.875 [2024-11-20 21:02:03.901086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.875 [2024-11-20 21:02:03.901111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:45.875 [2024-11-20 21:02:03.901120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:19:45.875 [2024-11-20 21:02:03.901128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.875 [2024-11-20 21:02:03.901211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:45.875 [2024-11-20 21:02:03.901222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:45.875 [2024-11-20 21:02:03.901231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:45.875 [2024-11-20 21:02:03.901239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.875 [2024-11-20 21:02:03.906204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.875 [2024-11-20 21:02:03.906242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:45.875 [2024-11-20 21:02:03.906250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.875 [2024-11-20 21:02:03.906260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.875 [2024-11-20 21:02:03.906341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.875 [2024-11-20 21:02:03.906352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:45.875 [2024-11-20 21:02:03.906360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.875 [2024-11-20 21:02:03.906372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.875 [2024-11-20 21:02:03.906414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.875 [2024-11-20 21:02:03.906425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:45.875 [2024-11-20 21:02:03.906432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.875 [2024-11-20 21:02:03.906441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.875 [2024-11-20 21:02:03.906462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.875 [2024-11-20 21:02:03.906472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:45.875 [2024-11-20 21:02:03.906479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.875 [2024-11-20 21:02:03.906487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.875 [2024-11-20 21:02:03.915264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:45.875 [2024-11-20 21:02:03.915306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:45.875 [2024-11-20 21:02:03.915319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:45.875 [2024-11-20 21:02:03.915328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:45.875 [2024-11-20 21:02:03.921910] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:45.875 [2024-11-20 21:02:03.921954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:45.875 [2024-11-20 21:02:03.921964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:45.875 [2024-11-20 21:02:03.921975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.875 [2024-11-20 21:02:03.922026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:45.875 [2024-11-20 21:02:03.922043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:45.875 [2024-11-20 21:02:03.922052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:45.875 [2024-11-20 21:02:03.922063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.875 [2024-11-20 21:02:03.922094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:45.875 [2024-11-20 21:02:03.922105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:45.875 [2024-11-20 21:02:03.922114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:45.875 [2024-11-20 21:02:03.922126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.875 [2024-11-20 21:02:03.922187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:45.875 [2024-11-20 21:02:03.922197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:45.875 [2024-11-20 21:02:03.922207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:45.875 [2024-11-20 21:02:03.922215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.875 [2024-11-20 21:02:03.922250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:45.875 [2024-11-20 21:02:03.922261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:45.875 [2024-11-20 21:02:03.922269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:45.875 [2024-11-20 21:02:03.922298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.875 [2024-11-20 21:02:03.922334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:45.875 [2024-11-20 21:02:03.922344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:45.875 [2024-11-20 21:02:03.922354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:45.876 [2024-11-20 21:02:03.922362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.876 [2024-11-20 21:02:03.922410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:45.876 [2024-11-20 21:02:03.922421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:45.876 [2024-11-20 21:02:03.922429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:45.876 [2024-11-20 21:02:03.922440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.876 [2024-11-20 21:02:03.922565] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.486 ms, result 0
00:19:46.137 21:02:04 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
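
With the first app gone, trim.sh@85 reads the device back through spdk_dd, whose invocation follows. A standalone sketch of that read-back, assuming --count is a dd-style block count and that ftl.json is the saved bdev config used to recreate ftl0 (all paths and values mirror the log):

  # Sketch, not captured output: dump 65536 blocks of ftl0 into a file.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
    --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The EAL parameters below show spdk_dd starting as its own SPDK app (file-prefix spdk_pid87638) on a single core, then bringing ftl0 back up through the same 'FTL startup' action sequence traced earlier.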
00:19:46.137 21:02:04 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:46.137 [2024-11-20 21:02:04.179638] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:19:46.137 [2024-11-20 21:02:04.179760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87638 ]
00:19:46.399 [2024-11-20 21:02:04.323169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:46.399 [2024-11-20 21:02:04.341675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:46.399 [2024-11-20 21:02:04.428702] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:46.399 [2024-11-20 21:02:04.428781] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:46.662 [2024-11-20 21:02:04.585960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:46.662 [2024-11-20 21:02:04.586011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:19:46.662 [2024-11-20 21:02:04.586024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:19:46.662 [2024-11-20 21:02:04.586036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:46.662 [2024-11-20 21:02:04.588364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:46.662 [2024-11-20 21:02:04.588406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:46.662 [2024-11-20 21:02:04.588416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.305 ms
00:19:46.662 [2024-11-20 21:02:04.588428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:46.662 [2024-11-20 21:02:04.588508] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:19:46.662 [2024-11-20 21:02:04.589177] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:19:46.662 [2024-11-20 21:02:04.589217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:46.662 [2024-11-20 21:02:04.589230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:46.662 [2024-11-20 21:02:04.589240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms
00:19:46.662 [2024-11-20 21:02:04.589247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:46.662 [2024-11-20 21:02:04.590599] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:19:46.662 [2024-11-20 21:02:04.593532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:46.662 [2024-11-20 21:02:04.593571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:19:46.662 [2024-11-20 21:02:04.593586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.934 ms
00:19:46.662 [2024-11-20 21:02:04.593593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:46.662 [2024-11-20 21:02:04.593656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:46.662 [2024-11-20 21:02:04.593666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:19:46.662 [2024-11-20 21:02:04.593675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 0.018 ms 00:19:46.662 [2024-11-20 21:02:04.593682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.662 [2024-11-20 21:02:04.599734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.662 [2024-11-20 21:02:04.599779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:46.662 [2024-11-20 21:02:04.599793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.013 ms 00:19:46.662 [2024-11-20 21:02:04.599801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.662 [2024-11-20 21:02:04.599917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.662 [2024-11-20 21:02:04.599933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:46.662 [2024-11-20 21:02:04.599945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:46.662 [2024-11-20 21:02:04.599953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.662 [2024-11-20 21:02:04.599980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.662 [2024-11-20 21:02:04.599990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:46.662 [2024-11-20 21:02:04.599998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:46.662 [2024-11-20 21:02:04.600006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.662 [2024-11-20 21:02:04.600032] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:46.662 [2024-11-20 21:02:04.601699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.662 [2024-11-20 21:02:04.601731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:46.662 [2024-11-20 21:02:04.601741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.674 ms 00:19:46.662 [2024-11-20 21:02:04.601771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.662 [2024-11-20 21:02:04.601821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.662 [2024-11-20 21:02:04.601831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:46.662 [2024-11-20 21:02:04.601840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:46.662 [2024-11-20 21:02:04.601854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.662 [2024-11-20 21:02:04.601871] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:46.662 [2024-11-20 21:02:04.601889] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:46.662 [2024-11-20 21:02:04.601927] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:46.662 [2024-11-20 21:02:04.601945] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:46.662 [2024-11-20 21:02:04.602048] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:46.662 [2024-11-20 21:02:04.602066] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:46.662 [2024-11-20 21:02:04.602078] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:46.662 [2024-11-20 21:02:04.602088] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:46.662 [2024-11-20 21:02:04.602097] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:46.662 [2024-11-20 21:02:04.602109] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:46.662 [2024-11-20 21:02:04.602116] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:46.662 [2024-11-20 21:02:04.602123] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:46.662 [2024-11-20 21:02:04.602132] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:46.662 [2024-11-20 21:02:04.602140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.662 [2024-11-20 21:02:04.602151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:46.663 [2024-11-20 21:02:04.602159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:19:46.663 [2024-11-20 21:02:04.602166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.663 [2024-11-20 21:02:04.602253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.663 [2024-11-20 21:02:04.602260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:46.663 [2024-11-20 21:02:04.602282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:46.663 [2024-11-20 21:02:04.602289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.663 [2024-11-20 21:02:04.602392] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:46.663 [2024-11-20 21:02:04.602403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:46.663 [2024-11-20 21:02:04.602414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.663 [2024-11-20 21:02:04.602422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.663 [2024-11-20 21:02:04.602435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:46.663 [2024-11-20 21:02:04.602443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:46.663 [2024-11-20 21:02:04.602451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:46.663 [2024-11-20 21:02:04.602463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:46.663 [2024-11-20 21:02:04.602471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:46.663 [2024-11-20 21:02:04.602479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.663 [2024-11-20 21:02:04.602487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:46.663 [2024-11-20 21:02:04.602496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:46.663 [2024-11-20 21:02:04.602504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:46.663 [2024-11-20 21:02:04.602512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:46.663 [2024-11-20 21:02:04.602520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:46.663 [2024-11-20 21:02:04.602528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.663 [2024-11-20 21:02:04.602536] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:46.663 [2024-11-20 21:02:04.602544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:46.663 [2024-11-20 21:02:04.602551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.663 [2024-11-20 21:02:04.602559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:46.663 [2024-11-20 21:02:04.602566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:46.663 [2024-11-20 21:02:04.602574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.663 [2024-11-20 21:02:04.602581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:46.663 [2024-11-20 21:02:04.602593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:46.663 [2024-11-20 21:02:04.602601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.663 [2024-11-20 21:02:04.602608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:46.663 [2024-11-20 21:02:04.602615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:46.663 [2024-11-20 21:02:04.602622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.663 [2024-11-20 21:02:04.602630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:46.663 [2024-11-20 21:02:04.602638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:46.663 [2024-11-20 21:02:04.602645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:46.663 [2024-11-20 21:02:04.602653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:46.663 [2024-11-20 21:02:04.602660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:46.663 [2024-11-20 21:02:04.602668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.663 [2024-11-20 21:02:04.602675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:46.663 [2024-11-20 21:02:04.602683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:46.663 [2024-11-20 21:02:04.602690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:46.663 [2024-11-20 21:02:04.602698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:46.663 [2024-11-20 21:02:04.602705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:46.663 [2024-11-20 21:02:04.602714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.663 [2024-11-20 21:02:04.602723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:46.663 [2024-11-20 21:02:04.602730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:46.663 [2024-11-20 21:02:04.602738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.663 [2024-11-20 21:02:04.602759] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:46.663 [2024-11-20 21:02:04.602772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:46.663 [2024-11-20 21:02:04.602781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:46.663 [2024-11-20 21:02:04.602789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:46.663 [2024-11-20 21:02:04.602797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:46.663 
[2024-11-20 21:02:04.602804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:46.663 [2024-11-20 21:02:04.602811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:46.663 [2024-11-20 21:02:04.602818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:46.663 [2024-11-20 21:02:04.602826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:46.663 [2024-11-20 21:02:04.602833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:46.663 [2024-11-20 21:02:04.602841] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:46.663 [2024-11-20 21:02:04.602850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.663 [2024-11-20 21:02:04.602861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:46.663 [2024-11-20 21:02:04.602868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:46.663 [2024-11-20 21:02:04.602876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:46.663 [2024-11-20 21:02:04.602884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:46.663 [2024-11-20 21:02:04.602891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:46.663 [2024-11-20 21:02:04.602898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:46.663 [2024-11-20 21:02:04.602905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:46.663 [2024-11-20 21:02:04.602912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:46.663 [2024-11-20 21:02:04.602919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:46.663 [2024-11-20 21:02:04.602931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:46.663 [2024-11-20 21:02:04.602938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:46.663 [2024-11-20 21:02:04.602945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:46.663 [2024-11-20 21:02:04.602951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:46.663 [2024-11-20 21:02:04.602959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:46.663 [2024-11-20 21:02:04.602965] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:46.663 [2024-11-20 21:02:04.602973] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:46.664 [2024-11-20 21:02:04.602984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:46.664 [2024-11-20 21:02:04.602991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:46.664 [2024-11-20 21:02:04.602998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:46.664 [2024-11-20 21:02:04.603005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:46.664 [2024-11-20 21:02:04.603013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.603021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:46.664 [2024-11-20 21:02:04.603028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.690 ms 00:19:46.664 [2024-11-20 21:02:04.603035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.613980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.614016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:46.664 [2024-11-20 21:02:04.614026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.896 ms 00:19:46.664 [2024-11-20 21:02:04.614034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.614161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.614172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:46.664 [2024-11-20 21:02:04.614183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:46.664 [2024-11-20 21:02:04.614194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.633951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.634000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:46.664 [2024-11-20 21:02:04.634012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.731 ms 00:19:46.664 [2024-11-20 21:02:04.634020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.634104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.634119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:46.664 [2024-11-20 21:02:04.634128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:46.664 [2024-11-20 21:02:04.634142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.634585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.634620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:46.664 [2024-11-20 21:02:04.634637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:19:46.664 [2024-11-20 21:02:04.634662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 
21:02:04.634880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.634897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:46.664 [2024-11-20 21:02:04.634914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:19:46.664 [2024-11-20 21:02:04.634927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.642264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.642322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:46.664 [2024-11-20 21:02:04.642341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.304 ms 00:19:46.664 [2024-11-20 21:02:04.642358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.645880] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:46.664 [2024-11-20 21:02:04.645930] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:46.664 [2024-11-20 21:02:04.645946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.645957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:46.664 [2024-11-20 21:02:04.645970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.468 ms 00:19:46.664 [2024-11-20 21:02:04.645981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.661361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.661401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:46.664 [2024-11-20 21:02:04.661412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.316 ms 00:19:46.664 [2024-11-20 21:02:04.661421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.663839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.663877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:46.664 [2024-11-20 21:02:04.663886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.337 ms 00:19:46.664 [2024-11-20 21:02:04.663893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.666137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.666175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:46.664 [2024-11-20 21:02:04.666184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.188 ms 00:19:46.664 [2024-11-20 21:02:04.666191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.666550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.666567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:46.664 [2024-11-20 21:02:04.666577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:19:46.664 [2024-11-20 21:02:04.666590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.686225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.686284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:46.664 [2024-11-20 21:02:04.686297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.611 ms 00:19:46.664 [2024-11-20 21:02:04.686305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.694126] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:46.664 [2024-11-20 21:02:04.711149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.711195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:46.664 [2024-11-20 21:02:04.711207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.760 ms 00:19:46.664 [2024-11-20 21:02:04.711226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.711310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.711321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:46.664 [2024-11-20 21:02:04.711330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:46.664 [2024-11-20 21:02:04.711341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.711395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.711405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:46.664 [2024-11-20 21:02:04.711414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:46.664 [2024-11-20 21:02:04.711431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.711454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.711462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:46.664 [2024-11-20 21:02:04.711471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:46.664 [2024-11-20 21:02:04.711478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.711515] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:46.664 [2024-11-20 21:02:04.711526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.711533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:46.664 [2024-11-20 21:02:04.711540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:46.664 [2024-11-20 21:02:04.711547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.716831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.716873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:46.664 [2024-11-20 21:02:04.716884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.264 ms 00:19:46.664 [2024-11-20 21:02:04.716892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.664 [2024-11-20 21:02:04.716983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:46.664 [2024-11-20 21:02:04.716993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:46.664 [2024-11-20 21:02:04.717003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:46.665 [2024-11-20 21:02:04.717011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:46.665 [2024-11-20 21:02:04.717977] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:46.665 [2024-11-20 21:02:04.719222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.677 ms, result 0 00:19:46.665 [2024-11-20 21:02:04.720386] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:46.665 [2024-11-20 21:02:04.727839] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:48.052  [2024-11-20T21:02:06.744Z] Copying: 23/256 [MB] (23 MBps) [2024-11-20T21:02:08.134Z] Copying: 42/256 [MB] (19 MBps) [2024-11-20T21:02:09.080Z] Copying: 56/256 [MB] (13 MBps) [2024-11-20T21:02:10.017Z] Copying: 70/256 [MB] (14 MBps) [2024-11-20T21:02:10.952Z] Copying: 84/256 [MB] (13 MBps) [2024-11-20T21:02:11.888Z] Copying: 103/256 [MB] (19 MBps) [2024-11-20T21:02:12.822Z] Copying: 119/256 [MB] (16 MBps) [2024-11-20T21:02:13.756Z] Copying: 138/256 [MB] (18 MBps) [2024-11-20T21:02:15.129Z] Copying: 158/256 [MB] (19 MBps) [2024-11-20T21:02:16.069Z] Copying: 175/256 [MB] (17 MBps) [2024-11-20T21:02:17.010Z] Copying: 193/256 [MB] (17 MBps) [2024-11-20T21:02:17.951Z] Copying: 208/256 [MB] (15 MBps) [2024-11-20T21:02:18.891Z] Copying: 227/256 [MB] (19 MBps) [2024-11-20T21:02:19.834Z] Copying: 243/256 [MB] (15 MBps) [2024-11-20T21:02:19.834Z] Copying: 256/256 [MB] (average 17 MBps)[2024-11-20 21:02:19.641646] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:01.715 [2024-11-20 21:02:19.643719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.715 [2024-11-20 21:02:19.643966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:01.715 [2024-11-20 21:02:19.644154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:01.715 [2024-11-20 21:02:19.644258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.715 [2024-11-20 21:02:19.644335] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:01.715 [2024-11-20 21:02:19.645243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.715 [2024-11-20 21:02:19.645428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:01.715 [2024-11-20 21:02:19.645512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:20:01.715 [2024-11-20 21:02:19.645541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.715 [2024-11-20 21:02:19.645962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.715 [2024-11-20 21:02:19.646085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:01.715 [2024-11-20 21:02:19.646164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms 00:20:01.715 [2024-11-20 21:02:19.646201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.715 [2024-11-20 21:02:19.650144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.715 [2024-11-20 21:02:19.650290] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:01.716 [2024-11-20 21:02:19.650367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.731 ms 00:20:01.716 [2024-11-20 21:02:19.650402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.716 [2024-11-20 21:02:19.657725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.716 [2024-11-20 21:02:19.657906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:01.716 [2024-11-20 21:02:19.657978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.140 ms 00:20:01.716 [2024-11-20 21:02:19.658002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.716 [2024-11-20 21:02:19.660903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.716 [2024-11-20 21:02:19.661073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:01.716 [2024-11-20 21:02:19.661151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.704 ms 00:20:01.716 [2024-11-20 21:02:19.661175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.716 [2024-11-20 21:02:19.666455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.716 [2024-11-20 21:02:19.666636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:01.716 [2024-11-20 21:02:19.666715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.110 ms 00:20:01.716 [2024-11-20 21:02:19.666763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.716 [2024-11-20 21:02:19.667020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.716 [2024-11-20 21:02:19.667125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:01.716 [2024-11-20 21:02:19.667189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:20:01.716 [2024-11-20 21:02:19.667291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.716 [2024-11-20 21:02:19.670549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.716 [2024-11-20 21:02:19.670722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:01.716 [2024-11-20 21:02:19.670864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.189 ms 00:20:01.716 [2024-11-20 21:02:19.670902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.716 [2024-11-20 21:02:19.673464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.716 [2024-11-20 21:02:19.673515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:01.716 [2024-11-20 21:02:19.673525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.440 ms 00:20:01.716 [2024-11-20 21:02:19.673533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.716 [2024-11-20 21:02:19.676037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.716 [2024-11-20 21:02:19.676084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:01.716 [2024-11-20 21:02:19.676094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.458 ms 00:20:01.716 [2024-11-20 21:02:19.676102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.716 [2024-11-20 21:02:19.678330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:01.716 [2024-11-20 21:02:19.678384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:01.716 [2024-11-20 21:02:19.678398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.149 ms 00:20:01.716 [2024-11-20 21:02:19.678410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.716 [2024-11-20 21:02:19.678462] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:01.716 [2024-11-20 21:02:19.678478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678649] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678932] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.678992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.679000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.679011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.679023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.679035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.679046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:01.716 [2024-11-20 21:02:19.679061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 
21:02:19.679169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:20:01.717 [2024-11-20 21:02:19.679410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:01.717 [2024-11-20 21:02:19.679449] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:01.717 [2024-11-20 21:02:19.679457] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 901b289f-d2d0-45a3-b4d3-7f72fc1e8a6a 00:20:01.717 [2024-11-20 21:02:19.679469] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:01.717 [2024-11-20 21:02:19.679488] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:01.717 [2024-11-20 21:02:19.679500] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:01.717 [2024-11-20 21:02:19.679513] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:01.717 [2024-11-20 21:02:19.679524] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:01.717 [2024-11-20 21:02:19.679536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:01.717 [2024-11-20 21:02:19.679555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:01.717 [2024-11-20 21:02:19.679562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:01.717 [2024-11-20 21:02:19.679569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:01.717 [2024-11-20 21:02:19.679577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.717 [2024-11-20 21:02:19.679588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:01.717 [2024-11-20 21:02:19.679597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:20:01.717 [2024-11-20 21:02:19.679613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.717 [2024-11-20 21:02:19.682054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.717 [2024-11-20 21:02:19.682100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:01.717 [2024-11-20 21:02:19.682112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.410 ms 00:20:01.717 [2024-11-20 21:02:19.682120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.717 [2024-11-20 21:02:19.682318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.717 [2024-11-20 21:02:19.682333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:01.717 [2024-11-20 21:02:19.682343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:01.717 [2024-11-20 21:02:19.682351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.717 [2024-11-20 21:02:19.690392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.717 [2024-11-20 21:02:19.690436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:01.717 [2024-11-20 21:02:19.690448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.717 
[2024-11-20 21:02:19.690457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.717 [2024-11-20 21:02:19.690535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.717 [2024-11-20 21:02:19.690546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:01.717 [2024-11-20 21:02:19.690555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.717 [2024-11-20 21:02:19.690568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.717 [2024-11-20 21:02:19.690625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.717 [2024-11-20 21:02:19.690636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:01.717 [2024-11-20 21:02:19.690645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.717 [2024-11-20 21:02:19.690654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.717 [2024-11-20 21:02:19.690674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.717 [2024-11-20 21:02:19.690688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:01.717 [2024-11-20 21:02:19.690698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.717 [2024-11-20 21:02:19.690706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.717 [2024-11-20 21:02:19.705212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.717 [2024-11-20 21:02:19.705258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:01.717 [2024-11-20 21:02:19.705269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.717 [2024-11-20 21:02:19.705278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.717 [2024-11-20 21:02:19.717243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.717 [2024-11-20 21:02:19.717306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:01.717 [2024-11-20 21:02:19.717320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.717 [2024-11-20 21:02:19.717329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.717 [2024-11-20 21:02:19.717395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.717 [2024-11-20 21:02:19.717405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:01.717 [2024-11-20 21:02:19.717415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.717 [2024-11-20 21:02:19.717424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.717 [2024-11-20 21:02:19.717457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.717 [2024-11-20 21:02:19.717467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:01.717 [2024-11-20 21:02:19.717480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.717 [2024-11-20 21:02:19.717488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.717 [2024-11-20 21:02:19.717561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.717 [2024-11-20 21:02:19.717573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:01.717 [2024-11-20 21:02:19.717582] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.717 [2024-11-20 21:02:19.717591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.717 [2024-11-20 21:02:19.717637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.717 [2024-11-20 21:02:19.717647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:01.717 [2024-11-20 21:02:19.717659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.717 [2024-11-20 21:02:19.717669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.717 [2024-11-20 21:02:19.717711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.718 [2024-11-20 21:02:19.717722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:01.718 [2024-11-20 21:02:19.717732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.718 [2024-11-20 21:02:19.717740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.718 [2024-11-20 21:02:19.717822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:01.718 [2024-11-20 21:02:19.717835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:01.718 [2024-11-20 21:02:19.717849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:01.718 [2024-11-20 21:02:19.717858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.718 [2024-11-20 21:02:19.718018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 74.269 ms, result 0 00:20:01.978 00:20:01.978 00:20:01.978 21:02:19 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:01.978 21:02:19 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:02.550 21:02:20 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:02.550 [2024-11-20 21:02:20.634811] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:20:02.550 [2024-11-20 21:02:20.634953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87840 ] 00:20:02.812 [2024-11-20 21:02:20.784161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.812 [2024-11-20 21:02:20.812995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.073 [2024-11-20 21:02:20.932594] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:03.073 [2024-11-20 21:02:20.932680] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:03.073 [2024-11-20 21:02:21.094189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.073 [2024-11-20 21:02:21.094256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:03.073 [2024-11-20 21:02:21.094283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:03.073 [2024-11-20 21:02:21.094293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.073 [2024-11-20 21:02:21.096984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.073 [2024-11-20 21:02:21.097039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:03.073 [2024-11-20 21:02:21.097052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:20:03.073 [2024-11-20 21:02:21.097066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.073 [2024-11-20 21:02:21.097186] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:03.073 [2024-11-20 21:02:21.097489] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:03.073 [2024-11-20 21:02:21.097525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.073 [2024-11-20 21:02:21.097539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:03.073 [2024-11-20 21:02:21.097551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:20:03.073 [2024-11-20 21:02:21.097560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.073 [2024-11-20 21:02:21.099555] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:03.073 [2024-11-20 21:02:21.103420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.073 [2024-11-20 21:02:21.103473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:03.073 [2024-11-20 21:02:21.103490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.868 ms 00:20:03.073 [2024-11-20 21:02:21.103498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.073 [2024-11-20 21:02:21.103581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.073 [2024-11-20 21:02:21.103592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:03.073 [2024-11-20 21:02:21.103602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:03.073 [2024-11-20 21:02:21.103609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.073 [2024-11-20 21:02:21.111718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:03.073 [2024-11-20 21:02:21.111789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:03.073 [2024-11-20 21:02:21.111800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.066 ms 00:20:03.073 [2024-11-20 21:02:21.111808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.073 [2024-11-20 21:02:21.111956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.073 [2024-11-20 21:02:21.111969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:03.073 [2024-11-20 21:02:21.111979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:03.073 [2024-11-20 21:02:21.111992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.073 [2024-11-20 21:02:21.112021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.073 [2024-11-20 21:02:21.112030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:03.073 [2024-11-20 21:02:21.112039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:03.073 [2024-11-20 21:02:21.112052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.073 [2024-11-20 21:02:21.112074] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:03.073 [2024-11-20 21:02:21.114159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.073 [2024-11-20 21:02:21.114204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:03.073 [2024-11-20 21:02:21.114214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.091 ms 00:20:03.074 [2024-11-20 21:02:21.114222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.074 [2024-11-20 21:02:21.114302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.074 [2024-11-20 21:02:21.114312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:03.074 [2024-11-20 21:02:21.114321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:03.074 [2024-11-20 21:02:21.114334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.074 [2024-11-20 21:02:21.114355] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:03.074 [2024-11-20 21:02:21.114376] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:03.074 [2024-11-20 21:02:21.114421] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:03.074 [2024-11-20 21:02:21.114440] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:03.074 [2024-11-20 21:02:21.114546] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:03.074 [2024-11-20 21:02:21.114559] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:03.074 [2024-11-20 21:02:21.114573] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:03.074 [2024-11-20 21:02:21.114584] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:03.074 [2024-11-20 21:02:21.114594] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:03.074 [2024-11-20 21:02:21.114602] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:03.074 [2024-11-20 21:02:21.114609] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:03.074 [2024-11-20 21:02:21.114617] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:03.074 [2024-11-20 21:02:21.114626] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:03.074 [2024-11-20 21:02:21.114635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.074 [2024-11-20 21:02:21.114647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:03.074 [2024-11-20 21:02:21.114657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:20:03.074 [2024-11-20 21:02:21.114665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.074 [2024-11-20 21:02:21.114788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.074 [2024-11-20 21:02:21.114807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:03.074 [2024-11-20 21:02:21.114816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:03.074 [2024-11-20 21:02:21.114825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.074 [2024-11-20 21:02:21.114930] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:03.074 [2024-11-20 21:02:21.114944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:03.074 [2024-11-20 21:02:21.114958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:03.074 [2024-11-20 21:02:21.114968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.074 [2024-11-20 21:02:21.114981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:03.074 [2024-11-20 21:02:21.114989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:03.074 [2024-11-20 21:02:21.114998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:03.074 [2024-11-20 21:02:21.115013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:03.074 [2024-11-20 21:02:21.115023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:03.074 [2024-11-20 21:02:21.115031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:03.074 [2024-11-20 21:02:21.115038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:03.074 [2024-11-20 21:02:21.115047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:03.074 [2024-11-20 21:02:21.115054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:03.074 [2024-11-20 21:02:21.115063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:03.074 [2024-11-20 21:02:21.115075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:03.074 [2024-11-20 21:02:21.115083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.074 [2024-11-20 21:02:21.115090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:03.074 [2024-11-20 21:02:21.115098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:03.074 [2024-11-20 21:02:21.115105] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.074 [2024-11-20 21:02:21.115113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:03.074 [2024-11-20 21:02:21.115121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:03.074 [2024-11-20 21:02:21.115129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:03.074 [2024-11-20 21:02:21.115137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:03.074 [2024-11-20 21:02:21.115151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:03.074 [2024-11-20 21:02:21.115158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:03.074 [2024-11-20 21:02:21.115166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:03.074 [2024-11-20 21:02:21.115174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:03.074 [2024-11-20 21:02:21.115181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:03.074 [2024-11-20 21:02:21.115191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:03.074 [2024-11-20 21:02:21.115200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:03.074 [2024-11-20 21:02:21.115207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:03.074 [2024-11-20 21:02:21.115215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:03.074 [2024-11-20 21:02:21.115222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:03.074 [2024-11-20 21:02:21.115229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:03.074 [2024-11-20 21:02:21.115237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:03.074 [2024-11-20 21:02:21.115245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:03.074 [2024-11-20 21:02:21.115254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:03.074 [2024-11-20 21:02:21.115261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:03.074 [2024-11-20 21:02:21.115268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:03.074 [2024-11-20 21:02:21.115277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.074 [2024-11-20 21:02:21.115284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:03.074 [2024-11-20 21:02:21.115291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:03.074 [2024-11-20 21:02:21.115297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.074 [2024-11-20 21:02:21.115304] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:03.074 [2024-11-20 21:02:21.115312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:03.074 [2024-11-20 21:02:21.115321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:03.074 [2024-11-20 21:02:21.115330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:03.074 [2024-11-20 21:02:21.115338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:03.074 [2024-11-20 21:02:21.115345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:03.074 [2024-11-20 21:02:21.115351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:03.074 
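The dump_region output here reports each layout region as three consecutive NOTICE entries: a "Region <name>" entry followed by its "offset" and "blocks" values in MiB (the base-device rows continue just below). A minimal sketch of regrouping those triples into table rows, assuming the raw log text as input; the helper names and patterns are illustrative, not SPDK tooling:

    import re

    # Illustrative parser for the ftl_layout dump_region entries in this log:
    # each region is emitted as three NOTICE entries in a fixed order
    # (name, offset, blocks). Nothing here is SPDK API.
    FIELD_RES = [
        ("region", re.compile(r"dump_region: \*NOTICE\*: \[FTL\]\[ftl0\] Region (\S+)")),
        ("offset", re.compile(r"dump_region: \*NOTICE\*: \[FTL\]\[ftl0\] offset: ([\d.]+) MiB")),
        ("blocks", re.compile(r"dump_region: \*NOTICE\*: \[FTL\]\[ftl0\] blocks: ([\d.]+) MiB")),
    ]

    def layout_rows(log_text: str) -> list[tuple[str, float, float]]:
        # The k-th match of each field pattern belongs to the k-th region,
        # because the dump always emits the three fields in lock-step.
        fields = {name: rx.findall(log_text) for name, rx in FIELD_RES}
        return [
            (name, float(off), float(size))
            for name, off, size in zip(fields["region"], fields["offset"], fields["blocks"])
        ]

Applied to the NV cache layout above, this yields rows such as ('l2p', 0.12, 90.00); note that 0.12 + 90.00 = 90.12 MiB, exactly the offset the dump reports for band_md, so the rows can also serve as a quick layout sanity check.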
[2024-11-20 21:02:21.115358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:03.074 [2024-11-20 21:02:21.115364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:03.074 [2024-11-20 21:02:21.115372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:03.074 [2024-11-20 21:02:21.115381] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:03.074 [2024-11-20 21:02:21.115390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:03.074 [2024-11-20 21:02:21.115400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:03.074 [2024-11-20 21:02:21.115408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:03.074 [2024-11-20 21:02:21.115416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:03.074 [2024-11-20 21:02:21.115424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:03.074 [2024-11-20 21:02:21.115431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:03.074 [2024-11-20 21:02:21.115437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:03.074 [2024-11-20 21:02:21.115444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:03.074 [2024-11-20 21:02:21.115451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:03.074 [2024-11-20 21:02:21.115458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:03.074 [2024-11-20 21:02:21.115472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:03.074 [2024-11-20 21:02:21.115480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:03.074 [2024-11-20 21:02:21.115488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:03.074 [2024-11-20 21:02:21.115494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:03.074 [2024-11-20 21:02:21.115501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:03.074 [2024-11-20 21:02:21.115508] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:03.074 [2024-11-20 21:02:21.115516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:03.074 [2024-11-20 21:02:21.115530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:03.075 [2024-11-20 21:02:21.115540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:03.075 [2024-11-20 21:02:21.115547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:03.075 [2024-11-20 21:02:21.115554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:03.075 [2024-11-20 21:02:21.115562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.075 [2024-11-20 21:02:21.115569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:03.075 [2024-11-20 21:02:21.115577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:20:03.075 [2024-11-20 21:02:21.115591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.075 [2024-11-20 21:02:21.129634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.075 [2024-11-20 21:02:21.129684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:03.075 [2024-11-20 21:02:21.129697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.989 ms 00:20:03.075 [2024-11-20 21:02:21.129705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.075 [2024-11-20 21:02:21.129861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.075 [2024-11-20 21:02:21.129875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:03.075 [2024-11-20 21:02:21.129888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:03.075 [2024-11-20 21:02:21.129896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.075 [2024-11-20 21:02:21.149801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.075 [2024-11-20 21:02:21.149853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:03.075 [2024-11-20 21:02:21.149867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.880 ms 00:20:03.075 [2024-11-20 21:02:21.149884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.075 [2024-11-20 21:02:21.149984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.075 [2024-11-20 21:02:21.150001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:03.075 [2024-11-20 21:02:21.150011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:03.075 [2024-11-20 21:02:21.150020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.075 [2024-11-20 21:02:21.150624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.075 [2024-11-20 21:02:21.150671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:03.075 [2024-11-20 21:02:21.150686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:20:03.075 [2024-11-20 21:02:21.150697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.075 [2024-11-20 21:02:21.150916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.075 [2024-11-20 21:02:21.150933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:03.075 [2024-11-20 21:02:21.150948] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:20:03.075 [2024-11-20 21:02:21.150967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.075 [2024-11-20 21:02:21.159915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.075 [2024-11-20 21:02:21.159963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:03.075 [2024-11-20 21:02:21.159983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.919 ms 00:20:03.075 [2024-11-20 21:02:21.159993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.075 [2024-11-20 21:02:21.164018] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:03.075 [2024-11-20 21:02:21.164074] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:03.075 [2024-11-20 21:02:21.164086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.075 [2024-11-20 21:02:21.164095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:03.075 [2024-11-20 21:02:21.164105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.954 ms 00:20:03.075 [2024-11-20 21:02:21.164112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.075 [2024-11-20 21:02:21.180263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.075 [2024-11-20 21:02:21.180315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:03.075 [2024-11-20 21:02:21.180327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.073 ms 00:20:03.075 [2024-11-20 21:02:21.180335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.075 [2024-11-20 21:02:21.183155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.075 [2024-11-20 21:02:21.183206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:03.075 [2024-11-20 21:02:21.183216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.726 ms 00:20:03.075 [2024-11-20 21:02:21.183224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.075 [2024-11-20 21:02:21.185869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.075 [2024-11-20 21:02:21.185916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:03.075 [2024-11-20 21:02:21.185926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:20:03.075 [2024-11-20 21:02:21.185934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.075 [2024-11-20 21:02:21.186346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.075 [2024-11-20 21:02:21.186419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:03.075 [2024-11-20 21:02:21.186430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:20:03.075 [2024-11-20 21:02:21.186438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.337 [2024-11-20 21:02:21.211802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.337 [2024-11-20 21:02:21.211869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:03.337 [2024-11-20 21:02:21.211886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.338 ms 00:20:03.337 [2024-11-20 21:02:21.211895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.337 [2024-11-20 21:02:21.220111] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:03.337 [2024-11-20 21:02:21.239432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.337 [2024-11-20 21:02:21.239484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:03.337 [2024-11-20 21:02:21.239506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.437 ms 00:20:03.337 [2024-11-20 21:02:21.239521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.337 [2024-11-20 21:02:21.239612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.337 [2024-11-20 21:02:21.239624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:03.337 [2024-11-20 21:02:21.239634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:03.337 [2024-11-20 21:02:21.239648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.337 [2024-11-20 21:02:21.239707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.337 [2024-11-20 21:02:21.239722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:03.338 [2024-11-20 21:02:21.239731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:03.338 [2024-11-20 21:02:21.239739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.338 [2024-11-20 21:02:21.239797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.338 [2024-11-20 21:02:21.239806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:03.338 [2024-11-20 21:02:21.239820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:03.338 [2024-11-20 21:02:21.239830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.338 [2024-11-20 21:02:21.239870] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:03.338 [2024-11-20 21:02:21.239883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.338 [2024-11-20 21:02:21.239891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:03.338 [2024-11-20 21:02:21.239899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:03.338 [2024-11-20 21:02:21.239907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.338 [2024-11-20 21:02:21.246026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.338 [2024-11-20 21:02:21.246080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:03.338 [2024-11-20 21:02:21.246092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.090 ms 00:20:03.338 [2024-11-20 21:02:21.246100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.338 [2024-11-20 21:02:21.246208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.338 [2024-11-20 21:02:21.246220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:03.338 [2024-11-20 21:02:21.246235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:03.338 [2024-11-20 21:02:21.246243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.338 
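Every management step above is traced as an Action entry followed by its name, duration, and status. A minimal sketch that pairs each step name with its duration and totals them, assuming the flattened log text of this build; the regex and functions are illustrative, not SPDK's own tooling. For the 'FTL startup' process, the per-step sum comes out somewhat below the 153.330 ms process total reported just below, the remainder being time spent between steps:

    import re

    # Pair each traced step's name with the duration entry that follows it.
    # Non-greedy matching keeps every name with its own duration, since the
    # two strictly alternate in the trace_step output. Assumes the flattened
    # form where an elapsed HH:MM:SS.mmm stamp trails each name.
    STEP_RE = re.compile(
        r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+?) \d{2}:\d{2}:\d{2}\.\d{3}"
        r".+?duration: ([\d.]+) ms",
        re.DOTALL,
    )

    def step_durations(log_text: str) -> list[tuple[str, float]]:
        return [(name, float(ms)) for name, ms in STEP_RE.findall(log_text)]

    def total_ms(log_text: str) -> float:
        # Compare against the "Management process finished ... duration" lines.
        return sum(ms for _, ms in step_durations(log_text))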
[2024-11-20 21:02:21.247863] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:03.338 [2024-11-20 21:02:21.249289] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.330 ms, result 0 00:20:03.338 [2024-11-20 21:02:21.250516] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:03.338 [2024-11-20 21:02:21.258004] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:03.601  [2024-11-20T21:02:21.720Z] Copying: 4096/4096 [kB] (average 17 MBps)[2024-11-20 21:02:21.490419] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:03.601 [2024-11-20 21:02:21.491515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.601 [2024-11-20 21:02:21.491572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:03.601 [2024-11-20 21:02:21.491584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:03.601 [2024-11-20 21:02:21.491593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.601 [2024-11-20 21:02:21.491615] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:03.601 [2024-11-20 21:02:21.492324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.601 [2024-11-20 21:02:21.492365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:03.601 [2024-11-20 21:02:21.492377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.696 ms 00:20:03.601 [2024-11-20 21:02:21.492386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.601 [2024-11-20 21:02:21.494446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.601 [2024-11-20 21:02:21.494493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:03.601 [2024-11-20 21:02:21.494503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.032 ms 00:20:03.601 [2024-11-20 21:02:21.494519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.601 [2024-11-20 21:02:21.498790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.601 [2024-11-20 21:02:21.498828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:03.601 [2024-11-20 21:02:21.498838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.254 ms 00:20:03.601 [2024-11-20 21:02:21.498855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.601 [2024-11-20 21:02:21.505855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.601 [2024-11-20 21:02:21.505899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:03.601 [2024-11-20 21:02:21.505909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.963 ms 00:20:03.601 [2024-11-20 21:02:21.505918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.601 [2024-11-20 21:02:21.508923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.601 [2024-11-20 21:02:21.508976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:03.601 [2024-11-20 21:02:21.508986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 2.941 ms 00:20:03.601 [2024-11-20 21:02:21.508994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.601 [2024-11-20 21:02:21.514308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.601 [2024-11-20 21:02:21.514369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:03.601 [2024-11-20 21:02:21.514380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.269 ms 00:20:03.601 [2024-11-20 21:02:21.514390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.601 [2024-11-20 21:02:21.514525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.601 [2024-11-20 21:02:21.514537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:03.601 [2024-11-20 21:02:21.514547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:03.601 [2024-11-20 21:02:21.514560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.601 [2024-11-20 21:02:21.518037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.601 [2024-11-20 21:02:21.518086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:03.601 [2024-11-20 21:02:21.518096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.456 ms 00:20:03.601 [2024-11-20 21:02:21.518103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.601 [2024-11-20 21:02:21.520543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.602 [2024-11-20 21:02:21.520595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:03.602 [2024-11-20 21:02:21.520605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.394 ms 00:20:03.602 [2024-11-20 21:02:21.520613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.602 [2024-11-20 21:02:21.522908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.602 [2024-11-20 21:02:21.522957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:03.602 [2024-11-20 21:02:21.522967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.250 ms 00:20:03.602 [2024-11-20 21:02:21.522975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.602 [2024-11-20 21:02:21.525422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.602 [2024-11-20 21:02:21.525472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:03.602 [2024-11-20 21:02:21.525482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.376 ms 00:20:03.602 [2024-11-20 21:02:21.525489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.602 [2024-11-20 21:02:21.525530] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:03.602 [2024-11-20 21:02:21.525545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 
21:02:21.525580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:20:03.602 [2024-11-20 21:02:21.525796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.525996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:03.602 [2024-11-20 21:02:21.526174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:03.603 [2024-11-20 21:02:21.526413] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:03.603 [2024-11-20 21:02:21.526422] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 901b289f-d2d0-45a3-b4d3-7f72fc1e8a6a 00:20:03.603 [2024-11-20 21:02:21.526431] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:03.603 [2024-11-20 21:02:21.526439] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:03.603 
[2024-11-20 21:02:21.526447] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:03.603 [2024-11-20 21:02:21.526460] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:03.603 [2024-11-20 21:02:21.526468] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:03.603 [2024-11-20 21:02:21.526477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:03.603 [2024-11-20 21:02:21.526488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:03.603 [2024-11-20 21:02:21.526494] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:03.603 [2024-11-20 21:02:21.526501] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:03.603 [2024-11-20 21:02:21.526509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.603 [2024-11-20 21:02:21.526518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:03.603 [2024-11-20 21:02:21.526528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:20:03.603 [2024-11-20 21:02:21.526536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.603 [2024-11-20 21:02:21.528620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.603 [2024-11-20 21:02:21.528660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:03.603 [2024-11-20 21:02:21.528671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.051 ms 00:20:03.603 [2024-11-20 21:02:21.528679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.603 [2024-11-20 21:02:21.528816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.603 [2024-11-20 21:02:21.528828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:03.603 [2024-11-20 21:02:21.528838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:20:03.603 [2024-11-20 21:02:21.528845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.603 [2024-11-20 21:02:21.536563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.603 [2024-11-20 21:02:21.536616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:03.603 [2024-11-20 21:02:21.536627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.603 [2024-11-20 21:02:21.536645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.603 [2024-11-20 21:02:21.536728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.603 [2024-11-20 21:02:21.536741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:03.603 [2024-11-20 21:02:21.536768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.603 [2024-11-20 21:02:21.536775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.603 [2024-11-20 21:02:21.536823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.603 [2024-11-20 21:02:21.536835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:03.603 [2024-11-20 21:02:21.536843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.603 [2024-11-20 21:02:21.536851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.603 [2024-11-20 21:02:21.536871] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:03.603 [2024-11-20 21:02:21.536880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:03.603 [2024-11-20 21:02:21.536888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.603 [2024-11-20 21:02:21.536895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.603 [2024-11-20 21:02:21.550106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.603 [2024-11-20 21:02:21.550162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:03.603 [2024-11-20 21:02:21.550173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.603 [2024-11-20 21:02:21.550182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.603 [2024-11-20 21:02:21.560142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.603 [2024-11-20 21:02:21.560190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:03.603 [2024-11-20 21:02:21.560201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.603 [2024-11-20 21:02:21.560210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.603 [2024-11-20 21:02:21.560257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.603 [2024-11-20 21:02:21.560265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:03.603 [2024-11-20 21:02:21.560274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.603 [2024-11-20 21:02:21.560291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.603 [2024-11-20 21:02:21.560323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.603 [2024-11-20 21:02:21.560337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:03.603 [2024-11-20 21:02:21.560345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.603 [2024-11-20 21:02:21.560354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.603 [2024-11-20 21:02:21.560425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.603 [2024-11-20 21:02:21.560436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:03.603 [2024-11-20 21:02:21.560445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.603 [2024-11-20 21:02:21.560452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.603 [2024-11-20 21:02:21.560491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.603 [2024-11-20 21:02:21.560503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:03.603 [2024-11-20 21:02:21.560514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.603 [2024-11-20 21:02:21.560521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.603 [2024-11-20 21:02:21.560561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.603 [2024-11-20 21:02:21.560571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:03.603 [2024-11-20 21:02:21.560581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.603 [2024-11-20 21:02:21.560588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
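The statistics dump a little above reports total writes 960 against user writes 0, and a write amplification factor of "inf". Assuming the conventional WAF definition (media-level writes divided by host writes), which is consistent with those numbers, a one-liner reproduces the reported value; the function is illustrative only:

    def waf(total_writes: int, user_writes: int) -> float:
        # Write amplification: media-level writes per host write. With zero
        # host writes recorded (only FTL metadata was persisted in this run)
        # the ratio is undefined, which the dump renders as "WAF: inf".
        return float("inf") if user_writes == 0 else total_writes / user_writes

    assert waf(960, 0) == float("inf")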
00:20:03.603 [2024-11-20 21:02:21.560634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.603 [2024-11-20 21:02:21.560648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:03.603 [2024-11-20 21:02:21.560657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.603 [2024-11-20 21:02:21.560665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.603 [2024-11-20 21:02:21.560842] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.294 ms, result 0 00:20:03.865 00:20:03.865 00:20:03.865 21:02:21 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=87854 00:20:03.865 21:02:21 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 87854 00:20:03.865 21:02:21 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87854 ']' 00:20:03.865 21:02:21 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:03.865 21:02:21 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.865 21:02:21 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.865 21:02:21 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.865 21:02:21 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.865 21:02:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:03.865 [2024-11-20 21:02:21.841561] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:20:03.865 [2024-11-20 21:02:21.841717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87854 ] 00:20:04.127 [2024-11-20 21:02:21.990795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.127 [2024-11-20 21:02:22.021518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.700 21:02:22 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.700 21:02:22 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:04.700 21:02:22 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:04.961 [2024-11-20 21:02:22.911816] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:04.961 [2024-11-20 21:02:22.911900] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:05.225 [2024-11-20 21:02:23.090369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.225 [2024-11-20 21:02:23.090434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:05.225 [2024-11-20 21:02:23.090453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:05.225 [2024-11-20 21:02:23.090465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.225 [2024-11-20 21:02:23.093124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.225 [2024-11-20 21:02:23.093181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:05.225 [2024-11-20 21:02:23.093192] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.638 ms 00:20:05.225 [2024-11-20 21:02:23.093202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.225 [2024-11-20 21:02:23.093328] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:05.225 [2024-11-20 21:02:23.093612] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:05.225 [2024-11-20 21:02:23.093631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.225 [2024-11-20 21:02:23.093648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:05.225 [2024-11-20 21:02:23.093661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:20:05.225 [2024-11-20 21:02:23.093671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.225 [2024-11-20 21:02:23.095650] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:05.225 [2024-11-20 21:02:23.099635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.225 [2024-11-20 21:02:23.099686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:05.225 [2024-11-20 21:02:23.099700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.981 ms 00:20:05.225 [2024-11-20 21:02:23.099709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.225 [2024-11-20 21:02:23.099812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.225 [2024-11-20 21:02:23.099827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:05.225 [2024-11-20 21:02:23.099842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:05.225 [2024-11-20 21:02:23.099850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.225 [2024-11-20 21:02:23.108350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.225 [2024-11-20 21:02:23.108395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:05.225 [2024-11-20 21:02:23.108409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.438 ms 00:20:05.225 [2024-11-20 21:02:23.108417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.225 [2024-11-20 21:02:23.108554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.225 [2024-11-20 21:02:23.108567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:05.225 [2024-11-20 21:02:23.108579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:05.225 [2024-11-20 21:02:23.108595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.226 [2024-11-20 21:02:23.108623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.226 [2024-11-20 21:02:23.108634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:05.226 [2024-11-20 21:02:23.108648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:05.226 [2024-11-20 21:02:23.108655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.226 [2024-11-20 21:02:23.108685] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:05.226 [2024-11-20 21:02:23.110857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:05.226 [2024-11-20 21:02:23.110908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:05.226 [2024-11-20 21:02:23.110918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.180 ms 00:20:05.226 [2024-11-20 21:02:23.110932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.226 [2024-11-20 21:02:23.110972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.226 [2024-11-20 21:02:23.110983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:05.226 [2024-11-20 21:02:23.110992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:05.226 [2024-11-20 21:02:23.111001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.226 [2024-11-20 21:02:23.111022] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:05.226 [2024-11-20 21:02:23.111046] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:05.226 [2024-11-20 21:02:23.111085] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:05.226 [2024-11-20 21:02:23.111108] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:05.226 [2024-11-20 21:02:23.111216] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:05.226 [2024-11-20 21:02:23.111232] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:05.226 [2024-11-20 21:02:23.111246] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:05.226 [2024-11-20 21:02:23.111259] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:05.226 [2024-11-20 21:02:23.111269] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:05.226 [2024-11-20 21:02:23.111284] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:05.226 [2024-11-20 21:02:23.111294] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:05.226 [2024-11-20 21:02:23.111306] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:05.226 [2024-11-20 21:02:23.111319] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:05.226 [2024-11-20 21:02:23.111330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.226 [2024-11-20 21:02:23.111339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:05.226 [2024-11-20 21:02:23.111350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:20:05.226 [2024-11-20 21:02:23.111360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.226 [2024-11-20 21:02:23.111449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.226 [2024-11-20 21:02:23.111468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:05.226 [2024-11-20 21:02:23.111480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:05.226 [2024-11-20 21:02:23.111487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.226 [2024-11-20 21:02:23.111592] 
ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:05.226 [2024-11-20 21:02:23.111613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:05.226 [2024-11-20 21:02:23.111626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:05.226 [2024-11-20 21:02:23.111635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.226 [2024-11-20 21:02:23.111649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:05.226 [2024-11-20 21:02:23.111660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:05.226 [2024-11-20 21:02:23.111670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:05.226 [2024-11-20 21:02:23.111678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:05.226 [2024-11-20 21:02:23.111689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:05.226 [2024-11-20 21:02:23.111697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:05.226 [2024-11-20 21:02:23.111709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:05.226 [2024-11-20 21:02:23.111717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:05.226 [2024-11-20 21:02:23.111728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:05.226 [2024-11-20 21:02:23.111737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:05.226 [2024-11-20 21:02:23.111766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:05.226 [2024-11-20 21:02:23.111775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.226 [2024-11-20 21:02:23.111787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:05.226 [2024-11-20 21:02:23.111794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:05.226 [2024-11-20 21:02:23.111805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.226 [2024-11-20 21:02:23.111813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:05.226 [2024-11-20 21:02:23.111827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:05.226 [2024-11-20 21:02:23.111837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:05.226 [2024-11-20 21:02:23.111847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:05.226 [2024-11-20 21:02:23.111855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:05.226 [2024-11-20 21:02:23.111867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:05.226 [2024-11-20 21:02:23.111876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:05.226 [2024-11-20 21:02:23.111886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:05.226 [2024-11-20 21:02:23.111894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:05.226 [2024-11-20 21:02:23.111904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:05.226 [2024-11-20 21:02:23.111913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:05.226 [2024-11-20 21:02:23.111924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:05.226 [2024-11-20 21:02:23.111932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:05.226 [2024-11-20 
21:02:23.111942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:05.226 [2024-11-20 21:02:23.111950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:05.226 [2024-11-20 21:02:23.111961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:05.226 [2024-11-20 21:02:23.111969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:05.226 [2024-11-20 21:02:23.111981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:05.226 [2024-11-20 21:02:23.111990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:05.226 [2024-11-20 21:02:23.112000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:05.226 [2024-11-20 21:02:23.112006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.226 [2024-11-20 21:02:23.112018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:05.226 [2024-11-20 21:02:23.112025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:05.226 [2024-11-20 21:02:23.112036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.226 [2024-11-20 21:02:23.112043] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:05.226 [2024-11-20 21:02:23.112052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:05.226 [2024-11-20 21:02:23.112060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:05.226 [2024-11-20 21:02:23.112070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:05.226 [2024-11-20 21:02:23.112079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:05.226 [2024-11-20 21:02:23.112088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:05.226 [2024-11-20 21:02:23.112095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:05.226 [2024-11-20 21:02:23.112103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:05.226 [2024-11-20 21:02:23.112111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:05.226 [2024-11-20 21:02:23.112126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:05.226 [2024-11-20 21:02:23.112135] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:05.226 [2024-11-20 21:02:23.112147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:05.226 [2024-11-20 21:02:23.112157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:05.226 [2024-11-20 21:02:23.112167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:05.226 [2024-11-20 21:02:23.112175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:05.226 [2024-11-20 21:02:23.112186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:05.226 [2024-11-20 21:02:23.112194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:05.226 
[2024-11-20 21:02:23.112203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:05.226 [2024-11-20 21:02:23.112210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:05.226 [2024-11-20 21:02:23.112220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:05.226 [2024-11-20 21:02:23.112228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:05.226 [2024-11-20 21:02:23.112237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:05.226 [2024-11-20 21:02:23.112246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:05.227 [2024-11-20 21:02:23.112256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:05.227 [2024-11-20 21:02:23.112263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:05.227 [2024-11-20 21:02:23.112284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:05.227 [2024-11-20 21:02:23.112292] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:05.227 [2024-11-20 21:02:23.112306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:05.227 [2024-11-20 21:02:23.112315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:05.227 [2024-11-20 21:02:23.112325] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:05.227 [2024-11-20 21:02:23.112332] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:05.227 [2024-11-20 21:02:23.112342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:05.227 [2024-11-20 21:02:23.112351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.112362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:05.227 [2024-11-20 21:02:23.112370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.831 ms 00:20:05.227 [2024-11-20 21:02:23.112379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.127400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.127452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:05.227 [2024-11-20 21:02:23.127465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.959 ms 00:20:05.227 [2024-11-20 21:02:23.127477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.127609] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.127630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:05.227 [2024-11-20 21:02:23.127645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:05.227 [2024-11-20 21:02:23.127656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.140934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.140986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:05.227 [2024-11-20 21:02:23.141002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.256 ms 00:20:05.227 [2024-11-20 21:02:23.141016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.141086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.141098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:05.227 [2024-11-20 21:02:23.141108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:05.227 [2024-11-20 21:02:23.141120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.141666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.141723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:05.227 [2024-11-20 21:02:23.141735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:20:05.227 [2024-11-20 21:02:23.141796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.141954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.141978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:05.227 [2024-11-20 21:02:23.141988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:20:05.227 [2024-11-20 21:02:23.141999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.150903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.150955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:05.227 [2024-11-20 21:02:23.150965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.879 ms 00:20:05.227 [2024-11-20 21:02:23.150976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.155004] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:05.227 [2024-11-20 21:02:23.155060] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:05.227 [2024-11-20 21:02:23.155073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.155084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:05.227 [2024-11-20 21:02:23.155094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.979 ms 00:20:05.227 [2024-11-20 21:02:23.155104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.171235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 
21:02:23.171295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:05.227 [2024-11-20 21:02:23.171307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.048 ms 00:20:05.227 [2024-11-20 21:02:23.171321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.174049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.174103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:05.227 [2024-11-20 21:02:23.174114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.636 ms 00:20:05.227 [2024-11-20 21:02:23.174124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.176885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.176937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:05.227 [2024-11-20 21:02:23.176947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.710 ms 00:20:05.227 [2024-11-20 21:02:23.176956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.177428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.177472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:05.227 [2024-11-20 21:02:23.177484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:20:05.227 [2024-11-20 21:02:23.177494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.218178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.218249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:05.227 [2024-11-20 21:02:23.218276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.656 ms 00:20:05.227 [2024-11-20 21:02:23.218291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.226549] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:05.227 [2024-11-20 21:02:23.246095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.246158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:05.227 [2024-11-20 21:02:23.246175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.686 ms 00:20:05.227 [2024-11-20 21:02:23.246183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.246302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.246314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:05.227 [2024-11-20 21:02:23.246336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:05.227 [2024-11-20 21:02:23.246345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.246418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.246432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:05.227 [2024-11-20 21:02:23.246443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:05.227 [2024-11-20 
21:02:23.246450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.246488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.246496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:05.227 [2024-11-20 21:02:23.246510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:05.227 [2024-11-20 21:02:23.246522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.246562] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:05.227 [2024-11-20 21:02:23.246572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.246582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:05.227 [2024-11-20 21:02:23.246591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:05.227 [2024-11-20 21:02:23.246601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.252770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.252824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:05.227 [2024-11-20 21:02:23.252836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.143 ms 00:20:05.227 [2024-11-20 21:02:23.252850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.252943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.227 [2024-11-20 21:02:23.252956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:05.227 [2024-11-20 21:02:23.252964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:05.227 [2024-11-20 21:02:23.252981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.227 [2024-11-20 21:02:23.254169] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:05.227 [2024-11-20 21:02:23.255567] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 163.476 ms, result 0 00:20:05.227 [2024-11-20 21:02:23.257686] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:05.227 Some configs were skipped because the RPC state that can call them passed over. 
00:20:05.227 21:02:23 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:05.490 [2024-11-20 21:02:23.494928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.490 [2024-11-20 21:02:23.494988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:05.490 [2024-11-20 21:02:23.495005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.719 ms 00:20:05.490 [2024-11-20 21:02:23.495015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.490 [2024-11-20 21:02:23.495053] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.855 ms, result 0 00:20:05.490 true 00:20:05.490 21:02:23 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:05.751 [2024-11-20 21:02:23.703045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:05.751 [2024-11-20 21:02:23.703106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:05.751 [2024-11-20 21:02:23.703119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.564 ms 00:20:05.751 [2024-11-20 21:02:23.703129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:05.751 [2024-11-20 21:02:23.703170] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.684 ms, result 0 00:20:05.751 true 00:20:05.751 21:02:23 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 87854 00:20:05.751 21:02:23 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87854 ']' 00:20:05.751 21:02:23 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87854 00:20:05.751 21:02:23 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:05.751 21:02:23 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:05.751 21:02:23 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87854 00:20:05.751 killing process with pid 87854 00:20:05.751 21:02:23 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:05.751 21:02:23 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:05.751 21:02:23 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87854' 00:20:05.751 21:02:23 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87854 00:20:05.751 21:02:23 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87854 00:20:06.019 [2024-11-20 21:02:23.883132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.019 [2024-11-20 21:02:23.883201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:06.019 [2024-11-20 21:02:23.883217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:06.019 [2024-11-20 21:02:23.883226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.019 [2024-11-20 21:02:23.883255] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:06.019 [2024-11-20 21:02:23.883854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.019 [2024-11-20 21:02:23.883899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:06.019 [2024-11-20 21:02:23.883912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.585 ms 00:20:06.020 [2024-11-20 21:02:23.883922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.020 [2024-11-20 21:02:23.884213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.020 [2024-11-20 21:02:23.884237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:06.020 [2024-11-20 21:02:23.884247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:20:06.020 [2024-11-20 21:02:23.884262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.020 [2024-11-20 21:02:23.888828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.020 [2024-11-20 21:02:23.888876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:06.020 [2024-11-20 21:02:23.888886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.546 ms 00:20:06.020 [2024-11-20 21:02:23.888896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.020 [2024-11-20 21:02:23.895854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.020 [2024-11-20 21:02:23.895897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:06.020 [2024-11-20 21:02:23.895908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.918 ms 00:20:06.020 [2024-11-20 21:02:23.895919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.020 [2024-11-20 21:02:23.898817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.020 [2024-11-20 21:02:23.898863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:06.020 [2024-11-20 21:02:23.898873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.820 ms 00:20:06.020 [2024-11-20 21:02:23.898882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.020 [2024-11-20 21:02:23.903267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.020 [2024-11-20 21:02:23.903315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:06.020 [2024-11-20 21:02:23.903326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.345 ms 00:20:06.020 [2024-11-20 21:02:23.903338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.020 [2024-11-20 21:02:23.903472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.020 [2024-11-20 21:02:23.903485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:06.020 [2024-11-20 21:02:23.903494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:06.020 [2024-11-20 21:02:23.903504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.020 [2024-11-20 21:02:23.906434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.020 [2024-11-20 21:02:23.906486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:06.020 [2024-11-20 21:02:23.906496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.905 ms 00:20:06.020 [2024-11-20 21:02:23.906507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.020 [2024-11-20 21:02:23.908763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.020 [2024-11-20 21:02:23.908807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:06.020 [2024-11-20 
21:02:23.908816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.198 ms 00:20:06.020 [2024-11-20 21:02:23.908825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.020 [2024-11-20 21:02:23.910884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.020 [2024-11-20 21:02:23.910932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:06.020 [2024-11-20 21:02:23.910942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.020 ms 00:20:06.020 [2024-11-20 21:02:23.910950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.020 [2024-11-20 21:02:23.913071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.020 [2024-11-20 21:02:23.913117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:06.020 [2024-11-20 21:02:23.913126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:20:06.020 [2024-11-20 21:02:23.913135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.020 [2024-11-20 21:02:23.913174] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:06.020 [2024-11-20 21:02:23.913191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913348] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 
21:02:23.913586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:06.020 [2024-11-20 21:02:23.913688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:06.021 [2024-11-20 21:02:23.913824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.913995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:06.021 [2024-11-20 21:02:23.914145] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:06.021 [2024-11-20 21:02:23.914154] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 901b289f-d2d0-45a3-b4d3-7f72fc1e8a6a 00:20:06.021 [2024-11-20 21:02:23.914165] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:06.021 [2024-11-20 21:02:23.914176] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:06.021 [2024-11-20 21:02:23.914185] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:06.021 [2024-11-20 21:02:23.914195] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:06.021 [2024-11-20 21:02:23.914205] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:06.021 [2024-11-20 21:02:23.914213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:06.021 [2024-11-20 21:02:23.914226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:06.021 [2024-11-20 21:02:23.914232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:06.021 [2024-11-20 21:02:23.914242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:06.021 [2024-11-20 21:02:23.914249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.021 [2024-11-20 21:02:23.914271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:06.021 [2024-11-20 21:02:23.914281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.077 ms 00:20:06.021 [2024-11-20 21:02:23.914293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.021 [2024-11-20 21:02:23.916181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.021 [2024-11-20 21:02:23.916216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:06.021 [2024-11-20 21:02:23.916228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.867 ms 00:20:06.021 [2024-11-20 21:02:23.916238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.021 [2024-11-20 21:02:23.916368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:06.021 [2024-11-20 21:02:23.916382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:06.021 [2024-11-20 21:02:23.916392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:06.021 [2024-11-20 21:02:23.916404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.021 [2024-11-20 21:02:23.923093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.021 [2024-11-20 21:02:23.923139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.021 [2024-11-20 21:02:23.923150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.021 [2024-11-20 21:02:23.923160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.021 [2024-11-20 21:02:23.923232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.021 [2024-11-20 21:02:23.923243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.021 [2024-11-20 21:02:23.923252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.021 [2024-11-20 21:02:23.923264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.021 [2024-11-20 21:02:23.923310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.021 [2024-11-20 21:02:23.923323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.021 [2024-11-20 21:02:23.923331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.021 [2024-11-20 21:02:23.923340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.021 [2024-11-20 21:02:23.923363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.021 [2024-11-20 21:02:23.923375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.021 [2024-11-20 21:02:23.923383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.021 [2024-11-20 21:02:23.923393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.021 [2024-11-20 21:02:23.935553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.021 [2024-11-20 21:02:23.935610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.021 [2024-11-20 21:02:23.935621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.021 [2024-11-20 21:02:23.935632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.021 [2024-11-20 21:02:23.944589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.021 [2024-11-20 21:02:23.944653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.021 [2024-11-20 21:02:23.944665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.021 [2024-11-20 21:02:23.944677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.021 [2024-11-20 21:02:23.944722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.021 [2024-11-20 21:02:23.944740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.021 [2024-11-20 21:02:23.944763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.021 [2024-11-20 21:02:23.944774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:06.021 [2024-11-20 21:02:23.944807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.021 [2024-11-20 21:02:23.944818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.021 [2024-11-20 21:02:23.944826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.021 [2024-11-20 21:02:23.944836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.022 [2024-11-20 21:02:23.944904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.022 [2024-11-20 21:02:23.944918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.022 [2024-11-20 21:02:23.944929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.022 [2024-11-20 21:02:23.944939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.022 [2024-11-20 21:02:23.944971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.022 [2024-11-20 21:02:23.944983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:06.022 [2024-11-20 21:02:23.944991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.022 [2024-11-20 21:02:23.945003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.022 [2024-11-20 21:02:23.945043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.022 [2024-11-20 21:02:23.945055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.022 [2024-11-20 21:02:23.945066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.022 [2024-11-20 21:02:23.945078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.022 [2024-11-20 21:02:23.945125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:06.022 [2024-11-20 21:02:23.945138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.022 [2024-11-20 21:02:23.945147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:06.022 [2024-11-20 21:02:23.945158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.022 [2024-11-20 21:02:23.945306] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 62.141 ms, result 0 00:20:06.347 21:02:24 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:06.347 [2024-11-20 21:02:24.216554] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:20:06.347 [2024-11-20 21:02:24.216962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87890 ] 00:20:06.347 [2024-11-20 21:02:24.362635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.347 [2024-11-20 21:02:24.392400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.615 [2024-11-20 21:02:24.508293] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.615 [2024-11-20 21:02:24.508385] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:06.615 [2024-11-20 21:02:24.670025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.615 [2024-11-20 21:02:24.670086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:06.615 [2024-11-20 21:02:24.670101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:06.615 [2024-11-20 21:02:24.670110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.615 [2024-11-20 21:02:24.672664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.615 [2024-11-20 21:02:24.672718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:06.615 [2024-11-20 21:02:24.672730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:20:06.615 [2024-11-20 21:02:24.672759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.615 [2024-11-20 21:02:24.672862] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:06.615 [2024-11-20 21:02:24.673126] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:06.615 [2024-11-20 21:02:24.673143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.615 [2024-11-20 21:02:24.673154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:06.615 [2024-11-20 21:02:24.673165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:20:06.615 [2024-11-20 21:02:24.673173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.615 [2024-11-20 21:02:24.675538] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:06.615 [2024-11-20 21:02:24.679344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.615 [2024-11-20 21:02:24.679401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:06.616 [2024-11-20 21:02:24.679419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.809 ms 00:20:06.616 [2024-11-20 21:02:24.679428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.616 [2024-11-20 21:02:24.679509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.616 [2024-11-20 21:02:24.679521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:06.616 [2024-11-20 21:02:24.679531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:06.616 [2024-11-20 21:02:24.679539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.616 [2024-11-20 21:02:24.687604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:06.616 [2024-11-20 21:02:24.687648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:06.616 [2024-11-20 21:02:24.687659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.015 ms 00:20:06.616 [2024-11-20 21:02:24.687667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.616 [2024-11-20 21:02:24.687833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.616 [2024-11-20 21:02:24.687848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:06.616 [2024-11-20 21:02:24.687857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:06.616 [2024-11-20 21:02:24.687869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.616 [2024-11-20 21:02:24.687902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.616 [2024-11-20 21:02:24.687912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:06.616 [2024-11-20 21:02:24.687927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:06.616 [2024-11-20 21:02:24.687936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.616 [2024-11-20 21:02:24.687964] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:06.616 [2024-11-20 21:02:24.689989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.616 [2024-11-20 21:02:24.690024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:06.616 [2024-11-20 21:02:24.690035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.032 ms 00:20:06.616 [2024-11-20 21:02:24.690047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.616 [2024-11-20 21:02:24.690095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.616 [2024-11-20 21:02:24.690104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:06.616 [2024-11-20 21:02:24.690113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:06.616 [2024-11-20 21:02:24.690121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.616 [2024-11-20 21:02:24.690139] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:06.616 [2024-11-20 21:02:24.690164] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:06.616 [2024-11-20 21:02:24.690205] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:06.616 [2024-11-20 21:02:24.690225] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:06.616 [2024-11-20 21:02:24.690360] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:06.616 [2024-11-20 21:02:24.690378] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:06.616 [2024-11-20 21:02:24.690388] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:06.616 [2024-11-20 21:02:24.690398] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:06.616 [2024-11-20 21:02:24.690408] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:06.616 [2024-11-20 21:02:24.690417] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:06.616 [2024-11-20 21:02:24.690429] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:06.616 [2024-11-20 21:02:24.690437] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:06.616 [2024-11-20 21:02:24.690447] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:06.616 [2024-11-20 21:02:24.690458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.616 [2024-11-20 21:02:24.690469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:06.616 [2024-11-20 21:02:24.690478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:20:06.616 [2024-11-20 21:02:24.690486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.616 [2024-11-20 21:02:24.690576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.616 [2024-11-20 21:02:24.690587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:06.616 [2024-11-20 21:02:24.690602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:06.616 [2024-11-20 21:02:24.690611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.616 [2024-11-20 21:02:24.690720] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:06.616 [2024-11-20 21:02:24.690732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:06.616 [2024-11-20 21:02:24.690764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.616 [2024-11-20 21:02:24.690774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.616 [2024-11-20 21:02:24.690782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:06.616 [2024-11-20 21:02:24.690790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:06.616 [2024-11-20 21:02:24.690797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:06.616 [2024-11-20 21:02:24.690806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:06.616 [2024-11-20 21:02:24.690814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:06.616 [2024-11-20 21:02:24.690822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.616 [2024-11-20 21:02:24.690829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:06.616 [2024-11-20 21:02:24.690836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:06.616 [2024-11-20 21:02:24.690843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:06.616 [2024-11-20 21:02:24.690850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:06.616 [2024-11-20 21:02:24.690857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:06.616 [2024-11-20 21:02:24.690865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.616 [2024-11-20 21:02:24.690875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:06.616 [2024-11-20 21:02:24.690885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:06.616 [2024-11-20 21:02:24.690892] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.616 [2024-11-20 21:02:24.690899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:06.616 [2024-11-20 21:02:24.690907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:06.616 [2024-11-20 21:02:24.690914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.616 [2024-11-20 21:02:24.690922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:06.616 [2024-11-20 21:02:24.690934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:06.616 [2024-11-20 21:02:24.690941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.616 [2024-11-20 21:02:24.690948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:06.616 [2024-11-20 21:02:24.690955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:06.616 [2024-11-20 21:02:24.690962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.616 [2024-11-20 21:02:24.690970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:06.616 [2024-11-20 21:02:24.690977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:06.616 [2024-11-20 21:02:24.690984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:06.616 [2024-11-20 21:02:24.690990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:06.616 [2024-11-20 21:02:24.690997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:06.616 [2024-11-20 21:02:24.691006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.616 [2024-11-20 21:02:24.691013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:06.616 [2024-11-20 21:02:24.691020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:06.616 [2024-11-20 21:02:24.691026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:06.616 [2024-11-20 21:02:24.691033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:06.616 [2024-11-20 21:02:24.691039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:06.616 [2024-11-20 21:02:24.691049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.616 [2024-11-20 21:02:24.691056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:06.616 [2024-11-20 21:02:24.691062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:06.616 [2024-11-20 21:02:24.691069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.616 [2024-11-20 21:02:24.691075] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:06.616 [2024-11-20 21:02:24.691082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:06.616 [2024-11-20 21:02:24.691095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:06.616 [2024-11-20 21:02:24.691105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:06.616 [2024-11-20 21:02:24.691113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:06.616 [2024-11-20 21:02:24.691120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:06.616 [2024-11-20 21:02:24.691127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:06.616 
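The dump_region listing above reports each FTL layout region as an offset and size in MiB, while the superblock metadata dump a little further below prints the same regions as raw hex block offsets and sizes (blk_offs/blk_sz). The two views are related by FTL's 4096-byte block size. A minimal sketch of the conversion, using a value taken from these dumps (the variable name is illustrative only):

# Illustrative only: blk_sz:0x5a00 from the Region type:0x2 entry in the
# superblock dump below works out to the 90.00 MiB reported for Region l2p
# above, assuming FTL's 4096-byte block size.
blk_sz=0x5a00
echo "$(( blk_sz * 4096 / 1024 / 1024 )) MiB"   # prints: 90 MiB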
[2024-11-20 21:02:24.691134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:06.616 [2024-11-20 21:02:24.691142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:06.616 [2024-11-20 21:02:24.691148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:06.616 [2024-11-20 21:02:24.691157] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:06.616 [2024-11-20 21:02:24.691166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.617 [2024-11-20 21:02:24.691176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:06.617 [2024-11-20 21:02:24.691185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:06.617 [2024-11-20 21:02:24.691192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:06.617 [2024-11-20 21:02:24.691200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:06.617 [2024-11-20 21:02:24.691207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:06.617 [2024-11-20 21:02:24.691214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:06.617 [2024-11-20 21:02:24.691221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:06.617 [2024-11-20 21:02:24.691228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:06.617 [2024-11-20 21:02:24.691236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:06.617 [2024-11-20 21:02:24.691248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:06.617 [2024-11-20 21:02:24.691255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:06.617 [2024-11-20 21:02:24.691261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:06.617 [2024-11-20 21:02:24.691270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:06.617 [2024-11-20 21:02:24.691277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:06.617 [2024-11-20 21:02:24.691284] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:06.617 [2024-11-20 21:02:24.691292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:06.617 [2024-11-20 21:02:24.691307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:06.617 [2024-11-20 21:02:24.691314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:06.617 [2024-11-20 21:02:24.691322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:06.617 [2024-11-20 21:02:24.691329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:06.617 [2024-11-20 21:02:24.691336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.617 [2024-11-20 21:02:24.691345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:06.617 [2024-11-20 21:02:24.691355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:20:06.617 [2024-11-20 21:02:24.691362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.617 [2024-11-20 21:02:24.705379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.617 [2024-11-20 21:02:24.705431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.617 [2024-11-20 21:02:24.705443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.965 ms 00:20:06.617 [2024-11-20 21:02:24.705452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.617 [2024-11-20 21:02:24.705582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.617 [2024-11-20 21:02:24.705594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:06.617 [2024-11-20 21:02:24.705610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:06.617 [2024-11-20 21:02:24.705618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.879 [2024-11-20 21:02:24.731187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.879 [2024-11-20 21:02:24.731278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.879 [2024-11-20 21:02:24.731305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.540 ms 00:20:06.879 [2024-11-20 21:02:24.731323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.879 [2024-11-20 21:02:24.731502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.879 [2024-11-20 21:02:24.731535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.879 [2024-11-20 21:02:24.731568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:06.879 [2024-11-20 21:02:24.731589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.879 [2024-11-20 21:02:24.732262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.879 [2024-11-20 21:02:24.732319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.879 [2024-11-20 21:02:24.732340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:20:06.879 [2024-11-20 21:02:24.732358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.879 [2024-11-20 21:02:24.732643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.879 [2024-11-20 21:02:24.732664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.879 [2024-11-20 21:02:24.732687] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:20:06.879 [2024-11-20 21:02:24.732704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.879 [2024-11-20 21:02:24.741708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.879 [2024-11-20 21:02:24.741791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.879 [2024-11-20 21:02:24.741810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.915 ms 00:20:06.879 [2024-11-20 21:02:24.741819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.879 [2024-11-20 21:02:24.745737] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:06.879 [2024-11-20 21:02:24.745822] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:06.879 [2024-11-20 21:02:24.745835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.879 [2024-11-20 21:02:24.745843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:06.879 [2024-11-20 21:02:24.745853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.898 ms 00:20:06.879 [2024-11-20 21:02:24.745861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.879 [2024-11-20 21:02:24.761813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.879 [2024-11-20 21:02:24.761864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:06.879 [2024-11-20 21:02:24.761877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.873 ms 00:20:06.879 [2024-11-20 21:02:24.761886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.879 [2024-11-20 21:02:24.765174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.879 [2024-11-20 21:02:24.765225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:06.879 [2024-11-20 21:02:24.765235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.158 ms 00:20:06.879 [2024-11-20 21:02:24.765243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.879 [2024-11-20 21:02:24.767895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.879 [2024-11-20 21:02:24.767943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:06.879 [2024-11-20 21:02:24.767954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:20:06.879 [2024-11-20 21:02:24.767962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.879 [2024-11-20 21:02:24.768307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.879 [2024-11-20 21:02:24.768324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:06.879 [2024-11-20 21:02:24.768340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:20:06.879 [2024-11-20 21:02:24.768348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.879 [2024-11-20 21:02:24.794014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.879 [2024-11-20 21:02:24.794073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:06.879 [2024-11-20 21:02:24.794087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.643 ms 00:20:06.879 [2024-11-20 21:02:24.794096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.879 [2024-11-20 21:02:24.802364] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:06.879 [2024-11-20 21:02:24.822101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.879 [2024-11-20 21:02:24.822156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:06.879 [2024-11-20 21:02:24.822169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.907 ms 00:20:06.879 [2024-11-20 21:02:24.822188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.879 [2024-11-20 21:02:24.822293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.879 [2024-11-20 21:02:24.822305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:06.879 [2024-11-20 21:02:24.822315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:06.879 [2024-11-20 21:02:24.822328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.879 [2024-11-20 21:02:24.822391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.879 [2024-11-20 21:02:24.822408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:06.880 [2024-11-20 21:02:24.822417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:06.880 [2024-11-20 21:02:24.822429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.880 [2024-11-20 21:02:24.822454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.880 [2024-11-20 21:02:24.822463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:06.880 [2024-11-20 21:02:24.822472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:06.880 [2024-11-20 21:02:24.822480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.880 [2024-11-20 21:02:24.822518] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:06.880 [2024-11-20 21:02:24.822529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.880 [2024-11-20 21:02:24.822538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:06.880 [2024-11-20 21:02:24.822545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:06.880 [2024-11-20 21:02:24.822554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.880 [2024-11-20 21:02:24.828610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.880 [2024-11-20 21:02:24.828662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:06.880 [2024-11-20 21:02:24.828674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.034 ms 00:20:06.880 [2024-11-20 21:02:24.828682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.880 [2024-11-20 21:02:24.828809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.880 [2024-11-20 21:02:24.828820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:06.880 [2024-11-20 21:02:24.828830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:06.880 [2024-11-20 21:02:24.828840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.880 
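Each management step above is reported by trace_step as an Action / name / duration / status quadruple. As a quick cross-check, a minimal sketch (assuming the console output above has been captured to a file; ftl_trim.log is a hypothetical name) sums the per-step durations, which can then be compared against the 'FTL startup' summary message that follows:

# Illustrative only: sum the per-step durations from trace_step lines such
# as "duration: 27.907 ms". The finish_msg summary uses "duration =" rather
# than "duration:", so the pattern deliberately excludes it.
grep -o 'duration: [0-9.]* ms' ftl_trim.log |
  awk '{ total += $2 } END { printf "sum of steps: %.3f ms\n", total }'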
[2024-11-20 21:02:24.830193] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:06.880 [2024-11-20 21:02:24.831603] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 159.850 ms, result 0 00:20:06.880 [2024-11-20 21:02:24.832862] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:06.880 [2024-11-20 21:02:24.840287] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:07.825  [2024-11-20T21:02:26.889Z] Copying: 18/256 [MB] (18 MBps) [2024-11-20T21:02:28.275Z] Copying: 39/256 [MB] (21 MBps) [2024-11-20T21:02:29.220Z] Copying: 52/256 [MB] (12 MBps) [2024-11-20T21:02:30.163Z] Copying: 73/256 [MB] (20 MBps) [2024-11-20T21:02:31.107Z] Copying: 85/256 [MB] (12 MBps) [2024-11-20T21:02:32.053Z] Copying: 95/256 [MB] (10 MBps) [2024-11-20T21:02:32.998Z] Copying: 106/256 [MB] (10 MBps) [2024-11-20T21:02:33.944Z] Copying: 121/256 [MB] (15 MBps) [2024-11-20T21:02:34.889Z] Copying: 137/256 [MB] (15 MBps) [2024-11-20T21:02:36.276Z] Copying: 148/256 [MB] (10 MBps) [2024-11-20T21:02:37.221Z] Copying: 161/256 [MB] (13 MBps) [2024-11-20T21:02:38.167Z] Copying: 174/256 [MB] (13 MBps) [2024-11-20T21:02:39.112Z] Copying: 189/256 [MB] (14 MBps) [2024-11-20T21:02:40.057Z] Copying: 213/256 [MB] (24 MBps) [2024-11-20T21:02:41.001Z] Copying: 224/256 [MB] (11 MBps) [2024-11-20T21:02:41.945Z] Copying: 241/256 [MB] (16 MBps) [2024-11-20T21:02:42.521Z] Copying: 251/256 [MB] (10 MBps) [2024-11-20T21:02:42.521Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-20 21:02:42.433212] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:24.402 [2024-11-20 21:02:42.435231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.402 [2024-11-20 21:02:42.435301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:24.402 [2024-11-20 21:02:42.435317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:24.402 [2024-11-20 21:02:42.435327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.402 [2024-11-20 21:02:42.435355] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:24.402 [2024-11-20 21:02:42.436154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.402 [2024-11-20 21:02:42.436202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:24.402 [2024-11-20 21:02:42.436217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:20:24.402 [2024-11-20 21:02:42.436228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.402 [2024-11-20 21:02:42.436551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.402 [2024-11-20 21:02:42.436564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:24.402 [2024-11-20 21:02:42.436574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:20:24.402 [2024-11-20 21:02:42.436590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.402 [2024-11-20 21:02:42.440830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.402 [2024-11-20 21:02:42.440861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:20:24.402 [2024-11-20 21:02:42.440872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.221 ms 00:20:24.402 [2024-11-20 21:02:42.440881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.402 [2024-11-20 21:02:42.449199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.402 [2024-11-20 21:02:42.449256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:24.402 [2024-11-20 21:02:42.449268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.296 ms 00:20:24.402 [2024-11-20 21:02:42.449277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.402 [2024-11-20 21:02:42.453019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.402 [2024-11-20 21:02:42.453074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:24.402 [2024-11-20 21:02:42.453085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.658 ms 00:20:24.402 [2024-11-20 21:02:42.453094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.402 [2024-11-20 21:02:42.458175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.402 [2024-11-20 21:02:42.458235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:24.402 [2024-11-20 21:02:42.458246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.031 ms 00:20:24.402 [2024-11-20 21:02:42.458268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.402 [2024-11-20 21:02:42.458406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.402 [2024-11-20 21:02:42.458426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:24.402 [2024-11-20 21:02:42.458436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:24.402 [2024-11-20 21:02:42.458444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.402 [2024-11-20 21:02:42.461933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.402 [2024-11-20 21:02:42.461981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:24.402 [2024-11-20 21:02:42.461991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.466 ms 00:20:24.402 [2024-11-20 21:02:42.461998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.402 [2024-11-20 21:02:42.465034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.402 [2024-11-20 21:02:42.465086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:24.402 [2024-11-20 21:02:42.465097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.990 ms 00:20:24.402 [2024-11-20 21:02:42.465104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.402 [2024-11-20 21:02:42.467481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.402 [2024-11-20 21:02:42.467533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:24.402 [2024-11-20 21:02:42.467543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.332 ms 00:20:24.402 [2024-11-20 21:02:42.467550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.402 [2024-11-20 21:02:42.469786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.402 [2024-11-20 21:02:42.469831] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:24.402 [2024-11-20 21:02:42.469841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.160 ms 00:20:24.402 [2024-11-20 21:02:42.469848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.402 [2024-11-20 21:02:42.469892] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:24.402 [2024-11-20 21:02:42.469908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.469919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.469928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.469936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.469944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.469953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.469962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.469970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.469978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.469986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.469995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 
[2024-11-20 21:02:42.470088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:24.402 [2024-11-20 21:02:42.470145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:20:24.403 [2024-11-20 21:02:42.470316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:24.403 [2024-11-20 21:02:42.470770] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:24.403 [2024-11-20 21:02:42.470780] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 901b289f-d2d0-45a3-b4d3-7f72fc1e8a6a 00:20:24.403 [2024-11-20 21:02:42.470788] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:24.403 [2024-11-20 21:02:42.470797] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:24.403 [2024-11-20 21:02:42.470805] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:24.403 [2024-11-20 21:02:42.470814] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:24.403 [2024-11-20 21:02:42.470823] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:24.403 [2024-11-20 21:02:42.470831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:24.403 [2024-11-20 21:02:42.470845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:24.403 [2024-11-20 21:02:42.470852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:24.403 [2024-11-20 21:02:42.470858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:24.403 [2024-11-20 21:02:42.470866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.403 [2024-11-20 21:02:42.470894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:24.403 [2024-11-20 21:02:42.470903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.977 ms 00:20:24.403 [2024-11-20 21:02:42.470911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.403 [2024-11-20 21:02:42.473198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.403 [2024-11-20 21:02:42.473237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:24.403 [2024-11-20 21:02:42.473248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.266 ms 00:20:24.403 [2024-11-20 21:02:42.473260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.403 [2024-11-20 21:02:42.473391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.403 [2024-11-20 21:02:42.473401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:24.403 [2024-11-20 21:02:42.473410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:24.403 [2024-11-20 21:02:42.473418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.403 [2024-11-20 21:02:42.481235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.404 [2024-11-20 21:02:42.481289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:24.404 [2024-11-20 21:02:42.481301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.404 [2024-11-20 21:02:42.481309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:24.404 [2024-11-20 21:02:42.481406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.404 [2024-11-20 21:02:42.481416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:24.404 [2024-11-20 21:02:42.481425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.404 [2024-11-20 21:02:42.481433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.404 [2024-11-20 21:02:42.481487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.404 [2024-11-20 21:02:42.481498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:24.404 [2024-11-20 21:02:42.481506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.404 [2024-11-20 21:02:42.481514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.404 [2024-11-20 21:02:42.481534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.404 [2024-11-20 21:02:42.481547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:24.404 [2024-11-20 21:02:42.481556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.404 [2024-11-20 21:02:42.481564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.404 [2024-11-20 21:02:42.494698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.404 [2024-11-20 21:02:42.494764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:24.404 [2024-11-20 21:02:42.494775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.404 [2024-11-20 21:02:42.494784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.404 [2024-11-20 21:02:42.504655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.404 [2024-11-20 21:02:42.504714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:24.404 [2024-11-20 21:02:42.504726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.404 [2024-11-20 21:02:42.504734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.404 [2024-11-20 21:02:42.504849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.404 [2024-11-20 21:02:42.504860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:24.404 [2024-11-20 21:02:42.504870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.404 [2024-11-20 21:02:42.504879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.404 [2024-11-20 21:02:42.504913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.404 [2024-11-20 21:02:42.504921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:24.404 [2024-11-20 21:02:42.504935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.404 [2024-11-20 21:02:42.504943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.404 [2024-11-20 21:02:42.505015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.404 [2024-11-20 21:02:42.505025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:24.404 [2024-11-20 21:02:42.505034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.404 [2024-11-20 
21:02:42.505042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.404 [2024-11-20 21:02:42.505081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.404 [2024-11-20 21:02:42.505091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:24.404 [2024-11-20 21:02:42.505106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.404 [2024-11-20 21:02:42.505113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.404 [2024-11-20 21:02:42.505155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.404 [2024-11-20 21:02:42.505166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:24.404 [2024-11-20 21:02:42.505175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.404 [2024-11-20 21:02:42.505186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.404 [2024-11-20 21:02:42.505231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.404 [2024-11-20 21:02:42.505243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:24.404 [2024-11-20 21:02:42.505255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.404 [2024-11-20 21:02:42.505266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.404 [2024-11-20 21:02:42.505421] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 70.162 ms, result 0 00:20:24.665 00:20:24.665 00:20:24.665 21:02:42 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:25.236 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:25.236 21:02:43 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:25.236 21:02:43 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:25.236 21:02:43 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:25.236 21:02:43 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:25.237 21:02:43 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:25.237 21:02:43 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:25.499 Process with pid 87854 is not found 00:20:25.499 21:02:43 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 87854 00:20:25.499 21:02:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87854 ']' 00:20:25.499 21:02:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87854 00:20:25.499 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (87854) - No such process 00:20:25.499 21:02:43 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 87854 is not found' 00:20:25.499 ************************************ 00:20:25.499 END TEST ftl_trim 00:20:25.499 ************************************ 00:20:25.499 00:20:25.499 real 1m4.689s 00:20:25.499 user 1m25.568s 00:20:25.499 sys 0m5.637s 00:20:25.499 21:02:43 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.499 21:02:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:25.499 21:02:43 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:25.499 21:02:43 ftl -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:25.499 21:02:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.499 21:02:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:25.499 ************************************ 00:20:25.499 START TEST ftl_restore 00:20:25.499 ************************************ 00:20:25.499 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:25.499 * Looking for test storage... 00:20:25.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:25.499 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:25.499 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:25.499 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:25.499 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:25.499 21:02:43 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.500 21:02:43 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:25.500 21:02:43 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.500 21:02:43 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.500 21:02:43 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.500 21:02:43 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:25.500 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.500 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:25.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.500 --rc genhtml_branch_coverage=1 00:20:25.500 --rc genhtml_function_coverage=1 00:20:25.500 --rc genhtml_legend=1 00:20:25.500 --rc geninfo_all_blocks=1 00:20:25.500 --rc geninfo_unexecuted_blocks=1 00:20:25.500 00:20:25.500 ' 00:20:25.500 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:25.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.500 --rc genhtml_branch_coverage=1 00:20:25.500 --rc genhtml_function_coverage=1 00:20:25.500 --rc genhtml_legend=1 00:20:25.500 --rc geninfo_all_blocks=1 00:20:25.500 --rc geninfo_unexecuted_blocks=1 00:20:25.500 00:20:25.500 ' 00:20:25.500 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:25.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.500 --rc genhtml_branch_coverage=1 00:20:25.500 --rc genhtml_function_coverage=1 00:20:25.500 --rc genhtml_legend=1 00:20:25.500 --rc geninfo_all_blocks=1 00:20:25.500 --rc geninfo_unexecuted_blocks=1 00:20:25.500 00:20:25.500 ' 00:20:25.500 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:25.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.500 --rc genhtml_branch_coverage=1 00:20:25.500 --rc genhtml_function_coverage=1 00:20:25.500 --rc genhtml_legend=1 00:20:25.500 --rc geninfo_all_blocks=1 00:20:25.500 --rc geninfo_unexecuted_blocks=1 00:20:25.500 00:20:25.500 ' 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.KcE9aEsKb6 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:25.500 
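At this point restore.sh has parsed its arguments (-c 0000:00:10.0 selects the NV cache device, and the positional 0000:00:11.0 is the base device) and is about to launch spdk_tgt and wait for its RPC socket, as the trace below shows. A reduced sketch of that bootstrap, with a simple polling loop standing in for the harness's waitforlisten helper (the loop is an assumption, not the harness's actual implementation):

# Illustrative only: start the SPDK target and wait until its RPC server
# answers, then attach the base device, as in the trace that follows.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
svcpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0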
21:02:43 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=88159 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 88159 00:20:25.500 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 88159 ']' 00:20:25.500 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.500 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.500 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:25.500 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.500 21:02:43 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:25.500 21:02:43 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:25.762 [2024-11-20 21:02:43.691223] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:20:25.762 [2024-11-20 21:02:43.691382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88159 ] 00:20:25.762 [2024-11-20 21:02:43.838442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.762 [2024-11-20 21:02:43.869357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.704 21:02:44 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.704 21:02:44 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:26.704 21:02:44 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:26.704 21:02:44 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:26.704 21:02:44 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:26.704 21:02:44 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:26.704 21:02:44 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:26.704 21:02:44 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:26.965 21:02:44 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:26.965 21:02:44 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:26.965 21:02:44 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:26.965 21:02:44 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:26.965 21:02:44 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:26.965 21:02:44 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:26.965 21:02:44 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:26.965 21:02:44 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:26.965 21:02:45 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:26.965 { 00:20:26.965 "name": "nvme0n1", 00:20:26.965 "aliases": [ 00:20:26.965 "e63551c3-c6ee-40a1-b1f1-192954125c7e" 00:20:26.965 ], 00:20:26.965 "product_name": "NVMe disk", 00:20:26.965 "block_size": 4096, 00:20:26.965 "num_blocks": 1310720, 00:20:26.965 "uuid": 
"e63551c3-c6ee-40a1-b1f1-192954125c7e", 00:20:26.965 "numa_id": -1, 00:20:26.965 "assigned_rate_limits": { 00:20:26.965 "rw_ios_per_sec": 0, 00:20:26.965 "rw_mbytes_per_sec": 0, 00:20:26.965 "r_mbytes_per_sec": 0, 00:20:26.965 "w_mbytes_per_sec": 0 00:20:26.965 }, 00:20:26.965 "claimed": true, 00:20:26.965 "claim_type": "read_many_write_one", 00:20:26.965 "zoned": false, 00:20:26.965 "supported_io_types": { 00:20:26.965 "read": true, 00:20:26.965 "write": true, 00:20:26.965 "unmap": true, 00:20:26.965 "flush": true, 00:20:26.965 "reset": true, 00:20:26.965 "nvme_admin": true, 00:20:26.965 "nvme_io": true, 00:20:26.965 "nvme_io_md": false, 00:20:26.965 "write_zeroes": true, 00:20:26.965 "zcopy": false, 00:20:26.965 "get_zone_info": false, 00:20:26.965 "zone_management": false, 00:20:26.965 "zone_append": false, 00:20:26.965 "compare": true, 00:20:26.965 "compare_and_write": false, 00:20:26.965 "abort": true, 00:20:26.965 "seek_hole": false, 00:20:26.965 "seek_data": false, 00:20:26.965 "copy": true, 00:20:26.965 "nvme_iov_md": false 00:20:26.965 }, 00:20:26.965 "driver_specific": { 00:20:26.965 "nvme": [ 00:20:26.965 { 00:20:26.965 "pci_address": "0000:00:11.0", 00:20:26.965 "trid": { 00:20:26.965 "trtype": "PCIe", 00:20:26.965 "traddr": "0000:00:11.0" 00:20:26.965 }, 00:20:26.965 "ctrlr_data": { 00:20:26.965 "cntlid": 0, 00:20:26.965 "vendor_id": "0x1b36", 00:20:26.965 "model_number": "QEMU NVMe Ctrl", 00:20:26.965 "serial_number": "12341", 00:20:26.965 "firmware_revision": "8.0.0", 00:20:26.965 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:26.965 "oacs": { 00:20:26.965 "security": 0, 00:20:26.965 "format": 1, 00:20:26.965 "firmware": 0, 00:20:26.966 "ns_manage": 1 00:20:26.966 }, 00:20:26.966 "multi_ctrlr": false, 00:20:26.966 "ana_reporting": false 00:20:26.966 }, 00:20:26.966 "vs": { 00:20:26.966 "nvme_version": "1.4" 00:20:26.966 }, 00:20:26.966 "ns_data": { 00:20:26.966 "id": 1, 00:20:26.966 "can_share": false 00:20:26.966 } 00:20:26.966 } 00:20:26.966 ], 00:20:26.966 "mp_policy": "active_passive" 00:20:26.966 } 00:20:26.966 } 00:20:26.966 ]' 00:20:26.966 21:02:45 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:27.227 21:02:45 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:27.227 21:02:45 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:27.227 21:02:45 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:27.227 21:02:45 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:27.227 21:02:45 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:27.227 21:02:45 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:27.227 21:02:45 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:27.227 21:02:45 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:27.227 21:02:45 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:27.227 21:02:45 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:27.488 21:02:45 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=a4d13566-26f7-4efb-8ec4-fa35e95f21aa 00:20:27.488 21:02:45 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:27.488 21:02:45 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a4d13566-26f7-4efb-8ec4-fa35e95f21aa 00:20:27.488 21:02:45 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:27.749 21:02:45 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=db98ae6b-6895-4380-97c7-faf115da5d71 00:20:27.749 21:02:45 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u db98ae6b-6895-4380-97c7-faf115da5d71 00:20:28.010 21:02:46 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d 00:20:28.010 21:02:46 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:28.010 21:02:46 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d 00:20:28.010 21:02:46 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:28.010 21:02:46 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:28.010 21:02:46 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d 00:20:28.010 21:02:46 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:28.010 21:02:46 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d 00:20:28.010 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d 00:20:28.010 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:28.010 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:28.010 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:28.010 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d 00:20:28.272 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:28.272 { 00:20:28.272 "name": "a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d", 00:20:28.272 "aliases": [ 00:20:28.272 "lvs/nvme0n1p0" 00:20:28.272 ], 00:20:28.272 "product_name": "Logical Volume", 00:20:28.272 "block_size": 4096, 00:20:28.272 "num_blocks": 26476544, 00:20:28.272 "uuid": "a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d", 00:20:28.272 "assigned_rate_limits": { 00:20:28.272 "rw_ios_per_sec": 0, 00:20:28.272 "rw_mbytes_per_sec": 0, 00:20:28.272 "r_mbytes_per_sec": 0, 00:20:28.272 "w_mbytes_per_sec": 0 00:20:28.272 }, 00:20:28.272 "claimed": false, 00:20:28.272 "zoned": false, 00:20:28.272 "supported_io_types": { 00:20:28.272 "read": true, 00:20:28.272 "write": true, 00:20:28.272 "unmap": true, 00:20:28.272 "flush": false, 00:20:28.272 "reset": true, 00:20:28.272 "nvme_admin": false, 00:20:28.272 "nvme_io": false, 00:20:28.272 "nvme_io_md": false, 00:20:28.272 "write_zeroes": true, 00:20:28.272 "zcopy": false, 00:20:28.272 "get_zone_info": false, 00:20:28.272 "zone_management": false, 00:20:28.272 "zone_append": false, 00:20:28.272 "compare": false, 00:20:28.272 "compare_and_write": false, 00:20:28.272 "abort": false, 00:20:28.272 "seek_hole": true, 00:20:28.272 "seek_data": true, 00:20:28.272 "copy": false, 00:20:28.272 "nvme_iov_md": false 00:20:28.272 }, 00:20:28.272 "driver_specific": { 00:20:28.272 "lvol": { 00:20:28.272 "lvol_store_uuid": "db98ae6b-6895-4380-97c7-faf115da5d71", 00:20:28.272 "base_bdev": "nvme0n1", 00:20:28.272 "thin_provision": true, 00:20:28.272 "num_allocated_clusters": 0, 00:20:28.272 "snapshot": false, 00:20:28.272 "clone": false, 00:20:28.272 "esnap_clone": false 00:20:28.272 } 00:20:28.272 } 00:20:28.272 } 00:20:28.272 ]' 00:20:28.272 21:02:46 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:28.272 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:28.272 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:28.272 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:28.272 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:28.272 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:28.272 21:02:46 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:28.272 21:02:46 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:28.272 21:02:46 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:28.534 21:02:46 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:28.534 21:02:46 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:28.534 21:02:46 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d 00:20:28.534 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d 00:20:28.534 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:28.534 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:28.534 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:28.534 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d 00:20:28.795 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:28.795 { 00:20:28.795 "name": "a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d", 00:20:28.795 "aliases": [ 00:20:28.795 "lvs/nvme0n1p0" 00:20:28.795 ], 00:20:28.795 "product_name": "Logical Volume", 00:20:28.795 "block_size": 4096, 00:20:28.795 "num_blocks": 26476544, 00:20:28.795 "uuid": "a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d", 00:20:28.795 "assigned_rate_limits": { 00:20:28.795 "rw_ios_per_sec": 0, 00:20:28.795 "rw_mbytes_per_sec": 0, 00:20:28.795 "r_mbytes_per_sec": 0, 00:20:28.795 "w_mbytes_per_sec": 0 00:20:28.795 }, 00:20:28.795 "claimed": false, 00:20:28.795 "zoned": false, 00:20:28.795 "supported_io_types": { 00:20:28.795 "read": true, 00:20:28.795 "write": true, 00:20:28.795 "unmap": true, 00:20:28.795 "flush": false, 00:20:28.795 "reset": true, 00:20:28.795 "nvme_admin": false, 00:20:28.795 "nvme_io": false, 00:20:28.795 "nvme_io_md": false, 00:20:28.795 "write_zeroes": true, 00:20:28.795 "zcopy": false, 00:20:28.795 "get_zone_info": false, 00:20:28.795 "zone_management": false, 00:20:28.795 "zone_append": false, 00:20:28.795 "compare": false, 00:20:28.795 "compare_and_write": false, 00:20:28.795 "abort": false, 00:20:28.795 "seek_hole": true, 00:20:28.796 "seek_data": true, 00:20:28.796 "copy": false, 00:20:28.796 "nvme_iov_md": false 00:20:28.796 }, 00:20:28.796 "driver_specific": { 00:20:28.796 "lvol": { 00:20:28.796 "lvol_store_uuid": "db98ae6b-6895-4380-97c7-faf115da5d71", 00:20:28.796 "base_bdev": "nvme0n1", 00:20:28.796 "thin_provision": true, 00:20:28.796 "num_allocated_clusters": 0, 00:20:28.796 "snapshot": false, 00:20:28.796 "clone": false, 00:20:28.796 "esnap_clone": false 00:20:28.796 } 00:20:28.796 } 00:20:28.796 } 00:20:28.796 ]' 00:20:28.796 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
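The same get_bdev_size pattern recurs throughout this trace: dump the bdev as JSON with bdev_get_bdevs, extract block_size and num_blocks with jq, and convert the product to MiB. A sketch using the lvol's numbers from the dump above:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev_info=$("$rpc_py" bdev_get_bdevs -b a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d)
bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096
nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544
echo $(( bs * nb / 1024 / 1024 ))              # 4096 B * 26476544 = 103424 MiB

The same arithmetic gives the 5120 MiB reported earlier for the raw nvme0n1 namespace (4096 B * 1310720 blocks).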
00:20:28.796 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:28.796 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:28.796 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:28.796 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:28.796 21:02:46 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:28.796 21:02:46 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:28.796 21:02:46 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:29.057 21:02:47 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:29.057 21:02:47 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d 00:20:29.057 21:02:47 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d 00:20:29.057 21:02:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:29.057 21:02:47 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:29.057 21:02:47 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:29.057 21:02:47 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d 00:20:29.319 21:02:47 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:29.319 { 00:20:29.319 "name": "a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d", 00:20:29.319 "aliases": [ 00:20:29.319 "lvs/nvme0n1p0" 00:20:29.319 ], 00:20:29.319 "product_name": "Logical Volume", 00:20:29.319 "block_size": 4096, 00:20:29.319 "num_blocks": 26476544, 00:20:29.319 "uuid": "a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d", 00:20:29.319 "assigned_rate_limits": { 00:20:29.319 "rw_ios_per_sec": 0, 00:20:29.319 "rw_mbytes_per_sec": 0, 00:20:29.319 "r_mbytes_per_sec": 0, 00:20:29.319 "w_mbytes_per_sec": 0 00:20:29.319 }, 00:20:29.319 "claimed": false, 00:20:29.319 "zoned": false, 00:20:29.319 "supported_io_types": { 00:20:29.319 "read": true, 00:20:29.319 "write": true, 00:20:29.319 "unmap": true, 00:20:29.319 "flush": false, 00:20:29.319 "reset": true, 00:20:29.319 "nvme_admin": false, 00:20:29.319 "nvme_io": false, 00:20:29.319 "nvme_io_md": false, 00:20:29.319 "write_zeroes": true, 00:20:29.319 "zcopy": false, 00:20:29.319 "get_zone_info": false, 00:20:29.319 "zone_management": false, 00:20:29.319 "zone_append": false, 00:20:29.319 "compare": false, 00:20:29.319 "compare_and_write": false, 00:20:29.319 "abort": false, 00:20:29.319 "seek_hole": true, 00:20:29.319 "seek_data": true, 00:20:29.319 "copy": false, 00:20:29.319 "nvme_iov_md": false 00:20:29.319 }, 00:20:29.319 "driver_specific": { 00:20:29.319 "lvol": { 00:20:29.319 "lvol_store_uuid": "db98ae6b-6895-4380-97c7-faf115da5d71", 00:20:29.319 "base_bdev": "nvme0n1", 00:20:29.319 "thin_provision": true, 00:20:29.319 "num_allocated_clusters": 0, 00:20:29.319 "snapshot": false, 00:20:29.319 "clone": false, 00:20:29.319 "esnap_clone": false 00:20:29.319 } 00:20:29.319 } 00:20:29.319 } 00:20:29.319 ]' 00:20:29.319 21:02:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:29.319 21:02:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:29.319 21:02:47 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:29.319 21:02:47 ftl.ftl_restore -- 
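With the cache controller attached as nvc0, common.sh sizes the write-buffer cache (cache_size=5171 MiB, the base_size requirement computed just above) and carves it out of nvc0n1 with bdev_split_create; the resulting split bdev shows up below as nvc0n1p0. The two RPCs, as traced:

"$rpc_py" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
cache_size=5171                                   # MiB, from the jq math above
"$rpc_py" bdev_split_create nvc0n1 -s "$cache_size" 1
# prints the split bdev name, nvc0n1p0, which becomes the FTL cache device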
common/autotest_common.sh@1388 -- # nb=26476544 00:20:29.319 21:02:47 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:29.319 21:02:47 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:29.319 21:02:47 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:29.319 21:02:47 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d --l2p_dram_limit 10' 00:20:29.319 21:02:47 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:29.319 21:02:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:29.319 21:02:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:29.319 21:02:47 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:29.319 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:29.319 21:02:47 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a58ffc26-6dcb-43b3-9c6f-8d92a751ba1d --l2p_dram_limit 10 -c nvc0n1p0 00:20:29.581 [2024-11-20 21:02:47.532472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.581 [2024-11-20 21:02:47.532598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:29.581 [2024-11-20 21:02:47.532614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:29.581 [2024-11-20 21:02:47.532622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.581 [2024-11-20 21:02:47.532665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.581 [2024-11-20 21:02:47.532673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.581 [2024-11-20 21:02:47.532682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:29.581 [2024-11-20 21:02:47.532690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.581 [2024-11-20 21:02:47.532705] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:29.581 [2024-11-20 21:02:47.532911] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:29.581 [2024-11-20 21:02:47.532925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.581 [2024-11-20 21:02:47.532932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.581 [2024-11-20 21:02:47.532940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:20:29.581 [2024-11-20 21:02:47.532948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.581 [2024-11-20 21:02:47.532972] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d39077d7-7682-4ed0-b438-a9982d49a5b0 00:20:29.581 [2024-11-20 21:02:47.533910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.581 [2024-11-20 21:02:47.533930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:29.581 [2024-11-20 21:02:47.533941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:29.581 [2024-11-20 21:02:47.533947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.581 [2024-11-20 21:02:47.538610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.581 [2024-11-20 
21:02:47.538641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.581 [2024-11-20 21:02:47.538651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.606 ms 00:20:29.582 [2024-11-20 21:02:47.538657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.582 [2024-11-20 21:02:47.538716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.582 [2024-11-20 21:02:47.538724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.582 [2024-11-20 21:02:47.538732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:29.582 [2024-11-20 21:02:47.538739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.582 [2024-11-20 21:02:47.538791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.582 [2024-11-20 21:02:47.538799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:29.582 [2024-11-20 21:02:47.538807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:29.582 [2024-11-20 21:02:47.538813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.582 [2024-11-20 21:02:47.538830] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:29.582 [2024-11-20 21:02:47.540169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.582 [2024-11-20 21:02:47.540238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.582 [2024-11-20 21:02:47.540278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.344 ms 00:20:29.582 [2024-11-20 21:02:47.540297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.582 [2024-11-20 21:02:47.540333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.582 [2024-11-20 21:02:47.540351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:29.582 [2024-11-20 21:02:47.540400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:29.582 [2024-11-20 21:02:47.540421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.582 [2024-11-20 21:02:47.540444] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:29.582 [2024-11-20 21:02:47.540568] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:29.582 [2024-11-20 21:02:47.540665] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:29.582 [2024-11-20 21:02:47.540723] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:29.582 [2024-11-20 21:02:47.540765] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:29.582 [2024-11-20 21:02:47.540793] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:29.582 [2024-11-20 21:02:47.540845] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:29.582 [2024-11-20 21:02:47.540867] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:29.582 [2024-11-20 21:02:47.540884] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:29.582 [2024-11-20 21:02:47.540903] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:29.582 [2024-11-20 21:02:47.540938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.582 [2024-11-20 21:02:47.540957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:29.582 [2024-11-20 21:02:47.540974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:20:29.582 [2024-11-20 21:02:47.540990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.582 [2024-11-20 21:02:47.541068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.582 [2024-11-20 21:02:47.541094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:29.582 [2024-11-20 21:02:47.541143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:29.582 [2024-11-20 21:02:47.541158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.582 [2024-11-20 21:02:47.541245] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:29.582 [2024-11-20 21:02:47.541264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:29.582 [2024-11-20 21:02:47.541280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.582 [2024-11-20 21:02:47.541331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.582 [2024-11-20 21:02:47.541349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:29.582 [2024-11-20 21:02:47.541365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:29.582 [2024-11-20 21:02:47.541379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:29.582 [2024-11-20 21:02:47.541394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:29.582 [2024-11-20 21:02:47.541409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:29.582 [2024-11-20 21:02:47.541452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.582 [2024-11-20 21:02:47.541469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:29.582 [2024-11-20 21:02:47.541485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:29.582 [2024-11-20 21:02:47.541572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.582 [2024-11-20 21:02:47.541592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:29.582 [2024-11-20 21:02:47.541607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:29.582 [2024-11-20 21:02:47.541622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.582 [2024-11-20 21:02:47.541638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:29.582 [2024-11-20 21:02:47.541653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:29.582 [2024-11-20 21:02:47.541667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.582 [2024-11-20 21:02:47.541682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:29.582 [2024-11-20 21:02:47.541696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:29.582 [2024-11-20 21:02:47.541738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.582 [2024-11-20 21:02:47.541790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:29.582 
[2024-11-20 21:02:47.541810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:29.582 [2024-11-20 21:02:47.541844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.582 [2024-11-20 21:02:47.541862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:29.582 [2024-11-20 21:02:47.541878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:29.582 [2024-11-20 21:02:47.541895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.582 [2024-11-20 21:02:47.541909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:29.582 [2024-11-20 21:02:47.541930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:29.582 [2024-11-20 21:02:47.541946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.582 [2024-11-20 21:02:47.541961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:29.582 [2024-11-20 21:02:47.541975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:29.582 [2024-11-20 21:02:47.542021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.582 [2024-11-20 21:02:47.542037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:29.582 [2024-11-20 21:02:47.542053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:29.582 [2024-11-20 21:02:47.542066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.582 [2024-11-20 21:02:47.542081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:29.582 [2024-11-20 21:02:47.542095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:29.582 [2024-11-20 21:02:47.542112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.582 [2024-11-20 21:02:47.542126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:29.582 [2024-11-20 21:02:47.542141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:29.582 [2024-11-20 21:02:47.542183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.582 [2024-11-20 21:02:47.542192] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:29.582 [2024-11-20 21:02:47.542198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:29.582 [2024-11-20 21:02:47.542207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.582 [2024-11-20 21:02:47.542217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.582 [2024-11-20 21:02:47.542225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:29.582 [2024-11-20 21:02:47.542231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:29.582 [2024-11-20 21:02:47.542238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:29.582 [2024-11-20 21:02:47.542243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:29.582 [2024-11-20 21:02:47.542258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:29.582 [2024-11-20 21:02:47.542264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:29.582 [2024-11-20 21:02:47.542274] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:29.582 [2024-11-20 
21:02:47.542283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.582 [2024-11-20 21:02:47.542291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:29.582 [2024-11-20 21:02:47.542297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:29.582 [2024-11-20 21:02:47.542305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:29.582 [2024-11-20 21:02:47.542311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:29.582 [2024-11-20 21:02:47.542318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:29.582 [2024-11-20 21:02:47.542324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:29.582 [2024-11-20 21:02:47.542332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:29.582 [2024-11-20 21:02:47.542337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:29.582 [2024-11-20 21:02:47.542344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:29.582 [2024-11-20 21:02:47.542350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:29.583 [2024-11-20 21:02:47.542357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:29.583 [2024-11-20 21:02:47.542362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:29.583 [2024-11-20 21:02:47.542370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:29.583 [2024-11-20 21:02:47.542375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:29.583 [2024-11-20 21:02:47.542382] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:29.583 [2024-11-20 21:02:47.542389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.583 [2024-11-20 21:02:47.542397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:29.583 [2024-11-20 21:02:47.542402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:29.583 [2024-11-20 21:02:47.542409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:29.583 [2024-11-20 21:02:47.542414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:29.583 [2024-11-20 21:02:47.542423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.583 [2024-11-20 21:02:47.542429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:29.583 [2024-11-20 21:02:47.542440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:20:29.583 [2024-11-20 21:02:47.542445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.583 [2024-11-20 21:02:47.542482] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:29.583 [2024-11-20 21:02:47.542489] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:32.896 [2024-11-20 21:02:50.848376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.896 [2024-11-20 21:02:50.848710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:32.896 [2024-11-20 21:02:50.848740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3305.874 ms 00:20:32.896 [2024-11-20 21:02:50.848774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.896 [2024-11-20 21:02:50.861878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.896 [2024-11-20 21:02:50.861934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:32.896 [2024-11-20 21:02:50.861951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.980 ms 00:20:32.896 [2024-11-20 21:02:50.861960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.896 [2024-11-20 21:02:50.862094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.896 [2024-11-20 21:02:50.862104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:32.896 [2024-11-20 21:02:50.862116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:32.896 [2024-11-20 21:02:50.862127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.896 [2024-11-20 21:02:50.874678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.896 [2024-11-20 21:02:50.874731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:32.896 [2024-11-20 21:02:50.874768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.496 ms 00:20:32.896 [2024-11-20 21:02:50.874835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.896 [2024-11-20 21:02:50.874894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.896 [2024-11-20 21:02:50.874905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:32.896 [2024-11-20 21:02:50.874916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:32.896 [2024-11-20 21:02:50.874926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.896 [2024-11-20 21:02:50.875417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.896 [2024-11-20 21:02:50.875462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:32.896 [2024-11-20 21:02:50.875477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:20:32.896 [2024-11-20 21:02:50.875486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.896 
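Everything from 'Check configuration' down to the teardown later in this log is emitted by mngt/ftl_mngt.c:trace_step() as Action / name / duration / status quadruples, which makes per-step timing easy to tabulate. A sketch, assuming the FTL messages were captured one entry per line into ftl.log (this fused console transcript would need to be split first):

# Print "duration<TAB>step name" for every trace_step() quadruple.
awk '/] name: /     { n = $0; sub(/.*] name: /, "", n) }
     /] duration: / { d = $0; sub(/.*] duration: /, "", d)
                      printf "%-14s %s\n", d, n }' ftl.log

On this run the table would be dominated by 'Scrub NV cache' at 3305.874 ms; most other startup steps complete in milliseconds or less.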
[2024-11-20 21:02:50.875623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.896 [2024-11-20 21:02:50.875639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:32.896 [2024-11-20 21:02:50.875653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:32.896 [2024-11-20 21:02:50.875662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.896 [2024-11-20 21:02:50.884120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.896 [2024-11-20 21:02:50.884174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:32.896 [2024-11-20 21:02:50.884193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.427 ms 00:20:32.896 [2024-11-20 21:02:50.884205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.896 [2024-11-20 21:02:50.894411] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:32.896 [2024-11-20 21:02:50.898393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.896 [2024-11-20 21:02:50.898442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:32.896 [2024-11-20 21:02:50.898455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.081 ms 00:20:32.896 [2024-11-20 21:02:50.898467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.896 [2024-11-20 21:02:51.000203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.896 [2024-11-20 21:02:51.000553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:32.896 [2024-11-20 21:02:51.000587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.699 ms 00:20:32.896 [2024-11-20 21:02:51.000608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.896 [2024-11-20 21:02:51.000934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.896 [2024-11-20 21:02:51.000963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:32.896 [2024-11-20 21:02:51.000991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:20:32.896 [2024-11-20 21:02:51.001008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.896 [2024-11-20 21:02:51.008141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.896 [2024-11-20 21:02:51.008207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:32.896 [2024-11-20 21:02:51.008229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.049 ms 00:20:32.896 [2024-11-20 21:02:51.008241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.157 [2024-11-20 21:02:51.014303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.157 [2024-11-20 21:02:51.014374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:33.157 [2024-11-20 21:02:51.014390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.006 ms 00:20:33.157 [2024-11-20 21:02:51.014406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.157 [2024-11-20 21:02:51.015027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.157 [2024-11-20 21:02:51.015257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:33.157 
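The L2P numbers above tie directly back to the bdev_ftl_create call traced earlier (-b ftl0 -d <lvol> --l2p_dram_limit 10 -c nvc0n1p0): the layout setup declared 20971520 L2P entries at 4 bytes each, an 80 MiB table, while the 10 MiB DRAM limit caps what stays resident, hence ftl_l2p_cache.c reporting a maximum resident size of 9 (of 10) MiB. The arithmetic, for the record:

l2p_entries=20971520      # "L2P entries" from the layout setup above
entry_size=4              # bytes, "L2P address size: 4"
echo $(( l2p_entries * entry_size / 1024 / 1024 ))   # 80 MiB = the l2p region
# --l2p_dram_limit 10 keeps at most ~10 MiB of that table in DRAM, so the
# rest of the L2P is demand-paged rather than held fully resident.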
[2024-11-20 21:02:51.015285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:20:33.157 [2024-11-20 21:02:51.015306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.157 [2024-11-20 21:02:51.066626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.157 [2024-11-20 21:02:51.066950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:33.157 [2024-11-20 21:02:51.066979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.253 ms 00:20:33.157 [2024-11-20 21:02:51.066990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.157 [2024-11-20 21:02:51.075194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.157 [2024-11-20 21:02:51.075259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:33.157 [2024-11-20 21:02:51.075272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.137 ms 00:20:33.157 [2024-11-20 21:02:51.075284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.157 [2024-11-20 21:02:51.081743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.157 [2024-11-20 21:02:51.081812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:33.157 [2024-11-20 21:02:51.081823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.404 ms 00:20:33.157 [2024-11-20 21:02:51.081833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.157 [2024-11-20 21:02:51.089082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.157 [2024-11-20 21:02:51.089296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:33.157 [2024-11-20 21:02:51.089317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.181 ms 00:20:33.157 [2024-11-20 21:02:51.089332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.157 [2024-11-20 21:02:51.089383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.157 [2024-11-20 21:02:51.089396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:33.157 [2024-11-20 21:02:51.089406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:33.157 [2024-11-20 21:02:51.089416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.157 [2024-11-20 21:02:51.089494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.157 [2024-11-20 21:02:51.089513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:33.157 [2024-11-20 21:02:51.089522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:33.157 [2024-11-20 21:02:51.089536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.157 [2024-11-20 21:02:51.090810] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3557.770 ms, result 0 00:20:33.157 { 00:20:33.157 "name": "ftl0", 00:20:33.157 "uuid": "d39077d7-7682-4ed0-b438-a9982d49a5b0" 00:20:33.157 } 00:20:33.157 21:02:51 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:33.157 21:02:51 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:33.419 21:02:51 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:33.419 21:02:51 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:33.419 [2024-11-20 21:02:51.516637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.419 [2024-11-20 21:02:51.516877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:33.419 [2024-11-20 21:02:51.516909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:33.419 [2024-11-20 21:02:51.516920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.419 [2024-11-20 21:02:51.516959] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:33.419 [2024-11-20 21:02:51.517713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.419 [2024-11-20 21:02:51.517791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:33.419 [2024-11-20 21:02:51.517811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:20:33.419 [2024-11-20 21:02:51.517827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.419 [2024-11-20 21:02:51.518186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.419 [2024-11-20 21:02:51.518217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:33.419 [2024-11-20 21:02:51.518227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:20:33.419 [2024-11-20 21:02:51.518262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.419 [2024-11-20 21:02:51.522676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.419 [2024-11-20 21:02:51.522770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:33.419 [2024-11-20 21:02:51.522787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.392 ms 00:20:33.419 [2024-11-20 21:02:51.522803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.419 [2024-11-20 21:02:51.529323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.419 [2024-11-20 21:02:51.529379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:33.419 [2024-11-20 21:02:51.529392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.481 ms 00:20:33.419 [2024-11-20 21:02:51.529407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.419 [2024-11-20 21:02:51.532603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.419 [2024-11-20 21:02:51.532818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:33.419 [2024-11-20 21:02:51.532838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms 00:20:33.419 [2024-11-20 21:02:51.532849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.681 [2024-11-20 21:02:51.539154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.681 [2024-11-20 21:02:51.539228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:33.681 [2024-11-20 21:02:51.539244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.204 ms 00:20:33.681 [2024-11-20 21:02:51.539265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.681 [2024-11-20 21:02:51.539451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.681 [2024-11-20 21:02:51.539474] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:33.681 [2024-11-20 21:02:51.539492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:20:33.681 [2024-11-20 21:02:51.539508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.681 [2024-11-20 21:02:51.542653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.681 [2024-11-20 21:02:51.542725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:33.681 [2024-11-20 21:02:51.542741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.116 ms 00:20:33.681 [2024-11-20 21:02:51.542782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.681 [2024-11-20 21:02:51.545897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.681 [2024-11-20 21:02:51.545961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:33.681 [2024-11-20 21:02:51.545971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.048 ms 00:20:33.681 [2024-11-20 21:02:51.545984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.681 [2024-11-20 21:02:51.548394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.681 [2024-11-20 21:02:51.548455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:33.681 [2024-11-20 21:02:51.548478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.357 ms 00:20:33.681 [2024-11-20 21:02:51.548492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.681 [2024-11-20 21:02:51.551060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.681 [2024-11-20 21:02:51.551275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:33.681 [2024-11-20 21:02:51.551294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.488 ms 00:20:33.681 [2024-11-20 21:02:51.551310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.681 [2024-11-20 21:02:51.551353] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:33.681 [2024-11-20 21:02:51.551383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:33.681 [2024-11-20 21:02:51.551395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:33.681 [2024-11-20 21:02:51.551406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:33.681 [2024-11-20 21:02:51.551416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:33.681 [2024-11-20 21:02:51.551429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:33.681 [2024-11-20 21:02:51.551438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:33.681 [2024-11-20 21:02:51.551448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:33.681 [2024-11-20 21:02:51.551456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:33.681 [2024-11-20 21:02:51.551466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:33.681 [2024-11-20 21:02:51.551474] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
[… Bands 11–100 identical: 0 / 261120 wr_cnt: 0 state: free (90 entries, 21:02:51.551491–21:02:51.552465, condensed) …]
00:20:33.682 [2024-11-20 21:02:51.552486] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:33.682 [2024-11-20 21:02:51.552496] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d39077d7-7682-4ed0-b438-a9982d49a5b0
00:20:33.682 [2024-11-20 21:02:51.552507] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:20:33.682 [2024-11-20 21:02:51.552515] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:20:33.682 [2024-11-20 21:02:51.552525] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:20:33.682 [2024-11-20 21:02:51.552534] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:20:33.682 [2024-11-20 21:02:51.552550] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:20:33.682 [2024-11-20 21:02:51.552562] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:20:33.682 [2024-11-20 21:02:51.552572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:20:33.682 [2024-11-20 21:02:51.552581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:20:33.682 [2024-11-20 21:02:51.552589] ftl_debug.c: 220:ftl_dev_dump_stats:
*NOTICE*: [FTL][ftl0] start: 0 00:20:33.682 [2024-11-20 21:02:51.552598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.682 [2024-11-20 21:02:51.552608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:33.682 [2024-11-20 21:02:51.552620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:20:33.682 [2024-11-20 21:02:51.552630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.682 [2024-11-20 21:02:51.555181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.682 [2024-11-20 21:02:51.555230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:33.682 [2024-11-20 21:02:51.555242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.522 ms 00:20:33.682 [2024-11-20 21:02:51.555260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.682 [2024-11-20 21:02:51.555390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.682 [2024-11-20 21:02:51.555408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:33.682 [2024-11-20 21:02:51.555425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:20:33.682 [2024-11-20 21:02:51.555435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.682 [2024-11-20 21:02:51.564666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.682 [2024-11-20 21:02:51.564734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:33.682 [2024-11-20 21:02:51.564777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.682 [2024-11-20 21:02:51.564794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.682 [2024-11-20 21:02:51.564865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.682 [2024-11-20 21:02:51.564879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:33.682 [2024-11-20 21:02:51.564888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.683 [2024-11-20 21:02:51.564900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.683 [2024-11-20 21:02:51.564985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.683 [2024-11-20 21:02:51.565003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:33.683 [2024-11-20 21:02:51.565014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.683 [2024-11-20 21:02:51.565029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.683 [2024-11-20 21:02:51.565047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.683 [2024-11-20 21:02:51.565059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:33.683 [2024-11-20 21:02:51.565068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.683 [2024-11-20 21:02:51.565079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.683 [2024-11-20 21:02:51.579547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.683 [2024-11-20 21:02:51.579610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:33.683 [2024-11-20 21:02:51.579622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.683 
[2024-11-20 21:02:51.579635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.683 [2024-11-20 21:02:51.591275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.683 [2024-11-20 21:02:51.591344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:33.683 [2024-11-20 21:02:51.591356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.683 [2024-11-20 21:02:51.591366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.683 [2024-11-20 21:02:51.591444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.683 [2024-11-20 21:02:51.591468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.683 [2024-11-20 21:02:51.591477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.683 [2024-11-20 21:02:51.591488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.683 [2024-11-20 21:02:51.591541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.683 [2024-11-20 21:02:51.591554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.683 [2024-11-20 21:02:51.591563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.683 [2024-11-20 21:02:51.591573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.683 [2024-11-20 21:02:51.591648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.683 [2024-11-20 21:02:51.591660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.683 [2024-11-20 21:02:51.591670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.683 [2024-11-20 21:02:51.591682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.683 [2024-11-20 21:02:51.591716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.683 [2024-11-20 21:02:51.591732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:33.683 [2024-11-20 21:02:51.591740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.683 [2024-11-20 21:02:51.591784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.683 [2024-11-20 21:02:51.591829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.683 [2024-11-20 21:02:51.591845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.683 [2024-11-20 21:02:51.591854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.683 [2024-11-20 21:02:51.591866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.683 [2024-11-20 21:02:51.591922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.683 [2024-11-20 21:02:51.591936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:33.683 [2024-11-20 21:02:51.591949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.683 [2024-11-20 21:02:51.591959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.683 [2024-11-20 21:02:51.592175] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 75.467 ms, result 0 00:20:33.683 true 00:20:33.683 21:02:51 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 88159 00:20:33.683 
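For context on the statistics dump above: the FTL's WAF (write amplification factor) statistic compares media writes with host (user) writes, so with total writes: 960 against user writes: 0 the ratio is undefined, which is why the log prints WAF: inf. As a worked check, assuming the conventional definition:

$$\mathrm{WAF} \;=\; \frac{\text{total (media) writes}}{\text{user writes}} \;=\; \frac{960}{0} \;\rightarrow\; \infty$$

The 960 media writes with zero user writes are consistent with a device that has only seen management and metadata traffic, which matches the all-free band dump.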
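The ps/kill sequence traced below is autotest_common.sh's killprocess helper tearing down the SPDK app (PID 88159). A minimal sketch of the flow visible in the xtrace, as a reconstruction rather than the verbatim SPDK helper; the sudo-child lookup is an assumption:

```bash
# Sketch of the killprocess flow seen in the xtrace below (reconstruction).
killprocess() {
	local pid=$1 process_name
	[[ -n $pid ]] || return 1                 # the '[' -z 88159 ']' guard
	kill -0 "$pid" || return 1                # fails if the process is already gone
	if [[ $(uname) == Linux ]]; then
		# comm of the target process; here it resolves to reactor_0
		process_name=$(ps --no-headers -o comm= "$pid")
	fi
	if [[ $process_name == sudo ]]; then
		# (assumed) re-target the workload child rather than the sudo wrapper
		pid=$(ps --ppid "$pid" --no-headers -o pid= | head -n1)
	fi
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid"                               # reap it and propagate its exit status
}
```

Here comm is reactor_0, so the sudo branch is skipped and the reactor is signalled and reaped directly, as the trace shows.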
21:02:51 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88159 ']' 00:20:33.683 21:02:51 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88159 00:20:33.683 21:02:51 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:33.683 21:02:51 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:33.683 21:02:51 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88159 00:20:33.683 killing process with pid 88159 00:20:33.683 21:02:51 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:33.683 21:02:51 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:33.683 21:02:51 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88159' 00:20:33.683 21:02:51 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 88159 00:20:33.683 21:02:51 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 88159 00:20:37.013 21:02:54 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:41.221 262144+0 records in 00:20:41.221 262144+0 records out 00:20:41.221 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.00367 s, 268 MB/s 00:20:41.221 21:02:58 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:43.136 21:03:00 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:43.136 [2024-11-20 21:03:01.008768] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:20:43.136 [2024-11-20 21:03:01.008903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88373 ] 00:20:43.136 [2024-11-20 21:03:01.154287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.136 [2024-11-20 21:03:01.184669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.401 [2024-11-20 21:03:01.296207] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:43.401 [2024-11-20 21:03:01.296285] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:43.401 [2024-11-20 21:03:01.457537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.401 [2024-11-20 21:03:01.457603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:43.401 [2024-11-20 21:03:01.457619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:43.401 [2024-11-20 21:03:01.457628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.401 [2024-11-20 21:03:01.457696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.401 [2024-11-20 21:03:01.457707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:43.401 [2024-11-20 21:03:01.457716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:43.401 [2024-11-20 21:03:01.457725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.401 [2024-11-20 21:03:01.457781] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:20:43.401 [2024-11-20 21:03:01.458094] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:43.401 [2024-11-20 21:03:01.458118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.401 [2024-11-20 21:03:01.458131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:43.401 [2024-11-20 21:03:01.458143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:20:43.401 [2024-11-20 21:03:01.458155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.401 [2024-11-20 21:03:01.459979] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:43.401 [2024-11-20 21:03:01.464086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.401 [2024-11-20 21:03:01.464142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:43.401 [2024-11-20 21:03:01.464155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.109 ms 00:20:43.401 [2024-11-20 21:03:01.464164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.401 [2024-11-20 21:03:01.464252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.401 [2024-11-20 21:03:01.464266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:43.401 [2024-11-20 21:03:01.464276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:43.401 [2024-11-20 21:03:01.464284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.401 [2024-11-20 21:03:01.472993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.401 [2024-11-20 21:03:01.473039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:43.402 [2024-11-20 21:03:01.473065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.665 ms 00:20:43.402 [2024-11-20 21:03:01.473078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.402 [2024-11-20 21:03:01.473184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.402 [2024-11-20 21:03:01.473195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:43.402 [2024-11-20 21:03:01.473208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:20:43.402 [2024-11-20 21:03:01.473220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.402 [2024-11-20 21:03:01.473282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.402 [2024-11-20 21:03:01.473294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:43.402 [2024-11-20 21:03:01.473303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:43.402 [2024-11-20 21:03:01.473311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.402 [2024-11-20 21:03:01.473345] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:43.402 [2024-11-20 21:03:01.475590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.402 [2024-11-20 21:03:01.475633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:43.402 [2024-11-20 21:03:01.475644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.256 ms 00:20:43.402 [2024-11-20 21:03:01.475662] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.402 [2024-11-20 21:03:01.475698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.402 [2024-11-20 21:03:01.475709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:20:43.402 [2024-11-20 21:03:01.475717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms
00:20:43.402 [2024-11-20 21:03:01.475725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.402 [2024-11-20 21:03:01.475785] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:20:43.402 [2024-11-20 21:03:01.475810] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:20:43.402 [2024-11-20 21:03:01.475859] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:20:43.402 [2024-11-20 21:03:01.475877] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:20:43.402 [2024-11-20 21:03:01.475997] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:20:43.402 [2024-11-20 21:03:01.476015] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:20:43.402 [2024-11-20 21:03:01.476026] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:20:43.402 [2024-11-20 21:03:01.476040] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:20:43.402 [2024-11-20 21:03:01.476054] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:20:43.402 [2024-11-20 21:03:01.476068] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:20:43.402 [2024-11-20 21:03:01.476081] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:20:43.402 [2024-11-20 21:03:01.476089] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:20:43.402 [2024-11-20 21:03:01.476099] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:20:43.402 [2024-11-20 21:03:01.476108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.402 [2024-11-20 21:03:01.476117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:20:43.402 [2024-11-20 21:03:01.476125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms
00:20:43.402 [2024-11-20 21:03:01.476138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.402 [2024-11-20 21:03:01.476222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.402 [2024-11-20 21:03:01.476258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:20:43.402 [2024-11-20 21:03:01.476267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:20:43.402 [2024-11-20 21:03:01.476279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.402 [2024-11-20 21:03:01.476391] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
    (dump_region entries 21:03:01.476413–.476781 rebuilt as a table; offset and blocks in MiB)
    Region           offset      blocks
    sb                 0.00        0.12
    l2p                0.12       80.00
    band_md           80.12        0.50
    band_md_mirror    80.62        0.50
    nvc_md           113.88        0.12
    nvc_md_mirror    114.00        0.12
    p2l0              81.12        8.00
    p2l1              89.12        8.00
    p2l2              97.12        8.00
    p2l3             105.12        8.00
    trim_md          113.12        0.25
    trim_md_mirror   113.38        0.25
    trim_log         113.62        0.12
    trim_log_mirror  113.75        0.12
00:20:43.402 [2024-11-20 21:03:01.476781] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
    Region           offset      blocks
    sb_mirror          0.00        0.12
    vmap          102400.25        3.38
    data_btm           0.25   102400.00
00:20:43.402 [2024-11-20 21:03:01.476870] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    (region entries 21:03:01.476880–.477000 rebuilt as a table)
    type         ver   blk_offs    blk_sz
    0x0            5   0x0         0x20
    0x2            0   0x20        0x5000
    0x3            2   0x5020      0x80
    0x4            2   0x50a0      0x80
    0xa            2   0x5120      0x800
    0xb            2   0x5920      0x800
    0xc            2   0x6120      0x800
    0xd            2   0x6920      0x800
    0xe            0   0x7120      0x40
    0xf            0   0x7160      0x40
    0x10           1   0x71a0      0x20
    0x11           1   0x71c0      0x20
    0x6            2   0x71e0      0x20
    0x7            2   0x7200      0x20
    0xfffffffe     0   0x7220      0x13c0e0
00:20:43.402 [2024-11-20 21:03:01.477007] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    type         ver   blk_offs    blk_sz
    0x1            5   0x0         0x20
    0xfffffffe     0   0x20        0x20
    0x9            0   0x40        0x1900000
    0x5            0   0x1900040   0x360
    0xfffffffe     0   0x19003a0   0x3fc60
00:20:43.402 [2024-11-20 21:03:01.477059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.402 [2024-11-20 21:03:01.477066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:20:43.402 [2024-11-20 21:03:01.477074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms
00:20:43.402 [2024-11-20 21:03:01.477082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.402 [2024-11-20 21:03:01.492424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.402 [2024-11-20 21:03:01.492475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:20:43.402 [2024-11-20 21:03:01.492488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.273 ms
00:20:43.402 [2024-11-20 21:03:01.492499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.402 [2024-11-20 21:03:01.492592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.402 [2024-11-20 21:03:01.492601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:20:43.402 [2024-11-20 21:03:01.492611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
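The layout numbers dumped above cross-check against each other; a quick sketch of the arithmetic (the 4 KiB FTL block size is an assumption, it is not printed in this log):

```bash
# L2P table: 20971520 entries x 4 bytes ("L2P address size: 4") = 80 MiB,
# which matches "Region l2p ... blocks: 80.00" in the NV cache layout table.
echo "l2p region: $((20971520 * 4 / 1024 / 1024)) MiB"    # -> 80 MiB

# One band: 261120 blocks x 4096 B (assumed block size) = 1020 MiB, so the
# ~100 bands in the validity dumps roughly cover the 102400 MiB data_btm region.
echo "band size:  $((261120 * 4096 / 1024 / 1024)) MiB"   # -> 1020 MiB
```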
00:20:43.402 [2024-11-20 21:03:01.492619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.663 [2024-11-20 21:03:01.515589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.663 [2024-11-20 21:03:01.515651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:43.663 [2024-11-20 21:03:01.515667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.901 ms 00:20:43.663 [2024-11-20 21:03:01.515678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.663 [2024-11-20 21:03:01.515741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.663 [2024-11-20 21:03:01.515784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:43.663 [2024-11-20 21:03:01.515794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:43.663 [2024-11-20 21:03:01.515808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.663 [2024-11-20 21:03:01.516415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.663 [2024-11-20 21:03:01.516449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:43.663 [2024-11-20 21:03:01.516463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:20:43.663 [2024-11-20 21:03:01.516474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.663 [2024-11-20 21:03:01.516650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.663 [2024-11-20 21:03:01.516665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:43.663 [2024-11-20 21:03:01.516676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:20:43.664 [2024-11-20 21:03:01.516687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.664 [2024-11-20 21:03:01.525392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.664 [2024-11-20 21:03:01.525453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:43.664 [2024-11-20 21:03:01.525471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.677 ms 00:20:43.664 [2024-11-20 21:03:01.525485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.664 [2024-11-20 21:03:01.529894] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:43.664 [2024-11-20 21:03:01.529950] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:43.664 [2024-11-20 21:03:01.529972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.664 [2024-11-20 21:03:01.529982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:43.664 [2024-11-20 21:03:01.529992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.380 ms 00:20:43.664 [2024-11-20 21:03:01.530002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.664 [2024-11-20 21:03:01.546281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.664 [2024-11-20 21:03:01.546339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:43.664 [2024-11-20 21:03:01.546357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.211 ms 00:20:43.664 [2024-11-20 21:03:01.546367] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:43.664 [2024-11-20 21:03:01.549607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.664 [2024-11-20 21:03:01.549838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:43.664 [2024-11-20 21:03:01.549860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.179 ms 00:20:43.664 [2024-11-20 21:03:01.549868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.664 [2024-11-20 21:03:01.552923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.664 [2024-11-20 21:03:01.553115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:43.664 [2024-11-20 21:03:01.553134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.935 ms 00:20:43.664 [2024-11-20 21:03:01.553143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.664 [2024-11-20 21:03:01.553493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.664 [2024-11-20 21:03:01.553509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:43.664 [2024-11-20 21:03:01.553520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:20:43.664 [2024-11-20 21:03:01.553528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.664 [2024-11-20 21:03:01.579516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.664 [2024-11-20 21:03:01.579651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:43.664 [2024-11-20 21:03:01.579669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.968 ms 00:20:43.664 [2024-11-20 21:03:01.579678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.664 [2024-11-20 21:03:01.588590] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:43.664 [2024-11-20 21:03:01.591977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.664 [2024-11-20 21:03:01.592021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:43.664 [2024-11-20 21:03:01.592048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.182 ms 00:20:43.664 [2024-11-20 21:03:01.592057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.664 [2024-11-20 21:03:01.592140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.664 [2024-11-20 21:03:01.592152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:43.664 [2024-11-20 21:03:01.592163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:43.664 [2024-11-20 21:03:01.592171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.664 [2024-11-20 21:03:01.592246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.664 [2024-11-20 21:03:01.592268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:43.664 [2024-11-20 21:03:01.592277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:43.664 [2024-11-20 21:03:01.592285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.664 [2024-11-20 21:03:01.592316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.664 [2024-11-20 21:03:01.592326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller
00:20:43.664 [2024-11-20 21:03:01.592335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:20:43.664 [2024-11-20 21:03:01.592352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.664 [2024-11-20 21:03:01.592390] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:20:43.664 [2024-11-20 21:03:01.592405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.664 [2024-11-20 21:03:01.592415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:20:43.664 [2024-11-20 21:03:01.592424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:20:43.664 [2024-11-20 21:03:01.592432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.664 [2024-11-20 21:03:01.598399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.664 [2024-11-20 21:03:01.598599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:20:43.664 [2024-11-20 21:03:01.598621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.943 ms
00:20:43.664 [2024-11-20 21:03:01.598629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.664 [2024-11-20 21:03:01.598713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:43.664 [2024-11-20 21:03:01.598727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:43.664 [2024-11-20 21:03:01.598737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms
00:20:43.664 [2024-11-20 21:03:01.598768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:43.664 [2024-11-20 21:03:01.600639] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 142.248 ms, result 0
00:20:44.606  [2024-11-20T21:03:03.668Z] Copying: 16/1024 [MB] (16 MBps)
[… 52 intermediate spdk_dd progress updates (21:03:05Z–21:03:55Z, 36–998 MB at 10–41 MBps), condensed …]
[2024-11-20T21:03:55.798Z] Copying: 1024/1024 [MB] (average 18 MBps)
[2024-11-20 21:03:55.522383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:37.679 [2024-11-20 21:03:55.522417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:21:37.679 [2024-11-20 21:03:55.522428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:21:37.679 [2024-11-20 21:03:55.522435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:37.679 [2024-11-20 21:03:55.522454] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:21:37.679 [2024-11-20 21:03:55.522840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:37.679 [2024-11-20 21:03:55.522856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:21:37.679 [2024-11-20 21:03:55.522864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms
00:21:37.679 [2024-11-20 21:03:55.522871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:37.679 [2024-11-20 21:03:55.524176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:37.679 [2024-11-20 21:03:55.524281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:21:37.679 [2024-11-20 21:03:55.524294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms
00:21:37.679 [2024-11-20 21:03:55.524300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:37.679 [2024-11-20 21:03:55.539568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:21:37.679 [2024-11-20 21:03:55.539682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:21:37.679 [2024-11-20 21:03:55.539695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.254 ms
00:21:37.679 [2024-11-20 21:03:55.539701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0]
status: 0 00:21:37.679 [2024-11-20 21:03:55.544503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.679 [2024-11-20 21:03:55.544525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:37.679 [2024-11-20 21:03:55.544539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.779 ms 00:21:37.679 [2024-11-20 21:03:55.544545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.679 [2024-11-20 21:03:55.546424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.679 [2024-11-20 21:03:55.546517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:37.679 [2024-11-20 21:03:55.546529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.839 ms 00:21:37.679 [2024-11-20 21:03:55.546535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.679 [2024-11-20 21:03:55.550172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.679 [2024-11-20 21:03:55.550198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:37.679 [2024-11-20 21:03:55.550205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.611 ms 00:21:37.679 [2024-11-20 21:03:55.550211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.679 [2024-11-20 21:03:55.550299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.679 [2024-11-20 21:03:55.550306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:37.679 [2024-11-20 21:03:55.550313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:21:37.679 [2024-11-20 21:03:55.550324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.679 [2024-11-20 21:03:55.552591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.679 [2024-11-20 21:03:55.552614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:37.679 [2024-11-20 21:03:55.552621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.256 ms 00:21:37.679 [2024-11-20 21:03:55.552626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.679 [2024-11-20 21:03:55.554639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.679 [2024-11-20 21:03:55.554662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:37.679 [2024-11-20 21:03:55.554669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.991 ms 00:21:37.679 [2024-11-20 21:03:55.554674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.679 [2024-11-20 21:03:55.556281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.679 [2024-11-20 21:03:55.556371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:37.679 [2024-11-20 21:03:55.556381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.584 ms 00:21:37.679 [2024-11-20 21:03:55.556386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.679 [2024-11-20 21:03:55.557734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.679 [2024-11-20 21:03:55.557768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:37.679 [2024-11-20 21:03:55.557775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.309 ms 00:21:37.679 [2024-11-20 
21:03:55.557780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.679 [2024-11-20 21:03:55.557800] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:37.679 [2024-11-20 21:03:55.557814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557949] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.557997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 
21:03:55.558098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
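The repeated records above come from ftl_dev_dump_bands walking every band: valid blocks out of the 261120-block band capacity, the write count, and the band state (all free here, since the device is being shut down with nothing written by the user). A minimal C sketch of a dump in that shape, using a simplified, hypothetical band struct rather than SPDK's internal one:

    #include <stddef.h>
    #include <stdio.h>

    /* Hypothetical, simplified view of per-band bookkeeping. */
    struct band {
        size_t valid_blocks;  /* blocks still holding live data */
        size_t num_blocks;    /* band capacity, 261120 in this log */
        size_t wr_cnt;        /* completed writes to the band */
        const char *state;    /* "free", "open", "closed", ... */
    };

    static void dump_bands(const struct band *bands, size_t n)
    {
        printf("Bands validity:\n");
        for (size_t i = 0; i < n; i++) {
            /* Same "Band N: valid / capacity wr_cnt: X state: S" shape as above. */
            printf("Band %zu: %zu / %zu wr_cnt: %zu state: %s\n",
                   i + 1, bands[i].valid_blocks, bands[i].num_blocks,
                   bands[i].wr_cnt, bands[i].state);
        }
    }

    int main(void)
    {
        struct band bands[3] = {
            { 0, 261120, 0, "free" },
            { 0, 261120, 0, "free" },
            { 0, 261120, 0, "free" },
        };
        dump_bands(bands, 3);
        return 0;
    }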
00:21:37.680 [2024-11-20 21:03:55.558250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:37.680 [2024-11-20 21:03:55.558342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:37.681 [2024-11-20 21:03:55.558348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:37.681 [2024-11-20 21:03:55.558353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:37.681 [2024-11-20 21:03:55.558359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:37.681 [2024-11-20 21:03:55.558365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:37.681 [2024-11-20 21:03:55.558371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:37.681 [2024-11-20 21:03:55.558377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:37.681 [2024-11-20 21:03:55.558385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:37.681 [2024-11-20 21:03:55.558390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:21:37.681 [2024-11-20 21:03:55.558396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:37.681 [2024-11-20 21:03:55.558401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:37.681 [2024-11-20 21:03:55.558413] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:37.681 [2024-11-20 21:03:55.558420] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d39077d7-7682-4ed0-b438-a9982d49a5b0 00:21:37.681 [2024-11-20 21:03:55.558426] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:37.681 [2024-11-20 21:03:55.558431] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:37.681 [2024-11-20 21:03:55.558437] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:37.681 [2024-11-20 21:03:55.558442] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:37.681 [2024-11-20 21:03:55.558448] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:37.681 [2024-11-20 21:03:55.558455] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:37.681 [2024-11-20 21:03:55.558460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:37.681 [2024-11-20 21:03:55.558464] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:37.681 [2024-11-20 21:03:55.558469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:37.681 [2024-11-20 21:03:55.558474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.681 [2024-11-20 21:03:55.558480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:37.681 [2024-11-20 21:03:55.558486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:21:37.681 [2024-11-20 21:03:55.558496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.559677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.681 [2024-11-20 21:03:55.559728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:37.681 [2024-11-20 21:03:55.559736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.170 ms 00:21:37.681 [2024-11-20 21:03:55.559742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.559816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.681 [2024-11-20 21:03:55.559826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:37.681 [2024-11-20 21:03:55.559833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:37.681 [2024-11-20 21:03:55.559838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.563878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.681 [2024-11-20 21:03:55.563903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:37.681 [2024-11-20 21:03:55.563911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.681 [2024-11-20 21:03:55.563916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.563954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.681 [2024-11-20 21:03:55.563966] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:37.681 [2024-11-20 21:03:55.563972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.681 [2024-11-20 21:03:55.563980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.564012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.681 [2024-11-20 21:03:55.564019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:37.681 [2024-11-20 21:03:55.564025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.681 [2024-11-20 21:03:55.564030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.564041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.681 [2024-11-20 21:03:55.564046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:37.681 [2024-11-20 21:03:55.564055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.681 [2024-11-20 21:03:55.564060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.571447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.681 [2024-11-20 21:03:55.571485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:37.681 [2024-11-20 21:03:55.571493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.681 [2024-11-20 21:03:55.571499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.577567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.681 [2024-11-20 21:03:55.577688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:37.681 [2024-11-20 21:03:55.577738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.681 [2024-11-20 21:03:55.577766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.577811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.681 [2024-11-20 21:03:55.577833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:37.681 [2024-11-20 21:03:55.577849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.681 [2024-11-20 21:03:55.577863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.577890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.681 [2024-11-20 21:03:55.578013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:37.681 [2024-11-20 21:03:55.578022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.681 [2024-11-20 21:03:55.578028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.578085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.681 [2024-11-20 21:03:55.578093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:37.681 [2024-11-20 21:03:55.578100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.681 [2024-11-20 21:03:55.578106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.578129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:21:37.681 [2024-11-20 21:03:55.578137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:37.681 [2024-11-20 21:03:55.578144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.681 [2024-11-20 21:03:55.578150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.578185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.681 [2024-11-20 21:03:55.578192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:37.681 [2024-11-20 21:03:55.578199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.681 [2024-11-20 21:03:55.578205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.578249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:37.681 [2024-11-20 21:03:55.578258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:37.681 [2024-11-20 21:03:55.578264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:37.681 [2024-11-20 21:03:55.578272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.681 [2024-11-20 21:03:55.578364] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.954 ms, result 0 00:21:37.942 00:21:37.942 00:21:37.942 21:03:55 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:37.942 [2024-11-20 21:03:56.048806] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
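The restore step just launched reads --count=262144 blocks from the ftl0 bdev into the test file. At a 4 KiB logical block size — inferred from the 1024 MB copy total reported by the progress records further down, not stated on the command line — that is exactly 1 GiB. A quick check of the arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t count = 262144;     /* --count from the spdk_dd invocation above */
        uint64_t block_size = 4096;  /* assumed 4 KiB FTL block, inferred from the 1024 MB total */
        uint64_t bytes = count * block_size;

        /* 262144 * 4096 = 1073741824 B = 1024 MiB */
        printf("%llu blocks * %llu B = %llu MiB\n",
               (unsigned long long)count,
               (unsigned long long)block_size,
               (unsigned long long)(bytes >> 20));
        return 0;
    }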
00:21:37.942 [2024-11-20 21:03:56.048926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88939 ] 00:21:38.202 [2024-11-20 21:03:56.189870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.202 [2024-11-20 21:03:56.206395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.202 [2024-11-20 21:03:56.286805] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:38.202 [2024-11-20 21:03:56.286857] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:38.463 [2024-11-20 21:03:56.433557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.463 [2024-11-20 21:03:56.433719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:38.463 [2024-11-20 21:03:56.433734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:38.463 [2024-11-20 21:03:56.433742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.463 [2024-11-20 21:03:56.433803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.463 [2024-11-20 21:03:56.433814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:38.463 [2024-11-20 21:03:56.433821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:38.463 [2024-11-20 21:03:56.433826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.463 [2024-11-20 21:03:56.433844] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:38.463 [2024-11-20 21:03:56.434023] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:38.463 [2024-11-20 21:03:56.434035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.463 [2024-11-20 21:03:56.434043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:38.463 [2024-11-20 21:03:56.434050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:21:38.463 [2024-11-20 21:03:56.434058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.463 [2024-11-20 21:03:56.435005] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:38.463 [2024-11-20 21:03:56.437115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.464 [2024-11-20 21:03:56.437141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:38.464 [2024-11-20 21:03:56.437150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.111 ms 00:21:38.464 [2024-11-20 21:03:56.437156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.464 [2024-11-20 21:03:56.437199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.464 [2024-11-20 21:03:56.437207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:38.464 [2024-11-20 21:03:56.437215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:38.464 [2024-11-20 21:03:56.437221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.464 [2024-11-20 21:03:56.441448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:38.464 [2024-11-20 21:03:56.441470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:38.464 [2024-11-20 21:03:56.441478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.193 ms 00:21:38.464 [2024-11-20 21:03:56.441491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.464 [2024-11-20 21:03:56.441552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.464 [2024-11-20 21:03:56.441560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:38.464 [2024-11-20 21:03:56.441566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:38.464 [2024-11-20 21:03:56.441571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.464 [2024-11-20 21:03:56.441610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.464 [2024-11-20 21:03:56.441617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:38.464 [2024-11-20 21:03:56.441628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:38.464 [2024-11-20 21:03:56.441636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.464 [2024-11-20 21:03:56.441653] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:38.464 [2024-11-20 21:03:56.442817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.464 [2024-11-20 21:03:56.442837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:38.464 [2024-11-20 21:03:56.442844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:21:38.464 [2024-11-20 21:03:56.442851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.464 [2024-11-20 21:03:56.442879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.464 [2024-11-20 21:03:56.442886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:38.464 [2024-11-20 21:03:56.442892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:38.464 [2024-11-20 21:03:56.442898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.464 [2024-11-20 21:03:56.442917] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:38.464 [2024-11-20 21:03:56.442935] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:38.464 [2024-11-20 21:03:56.442965] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:38.464 [2024-11-20 21:03:56.442977] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:38.464 [2024-11-20 21:03:56.443057] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:38.464 [2024-11-20 21:03:56.443065] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:38.464 [2024-11-20 21:03:56.443073] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:38.464 [2024-11-20 21:03:56.443085] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:38.464 [2024-11-20 21:03:56.443092] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:38.464 [2024-11-20 21:03:56.443098] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:38.464 [2024-11-20 21:03:56.443104] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:38.464 [2024-11-20 21:03:56.443112] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:38.464 [2024-11-20 21:03:56.443118] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:38.464 [2024-11-20 21:03:56.443126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.464 [2024-11-20 21:03:56.443132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:38.464 [2024-11-20 21:03:56.443138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:21:38.464 [2024-11-20 21:03:56.443143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.464 [2024-11-20 21:03:56.443208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.464 [2024-11-20 21:03:56.443216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:38.464 [2024-11-20 21:03:56.443222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:38.464 [2024-11-20 21:03:56.443228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.464 [2024-11-20 21:03:56.443302] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:38.464 [2024-11-20 21:03:56.443310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:38.464 [2024-11-20 21:03:56.443320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:38.464 [2024-11-20 21:03:56.443329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.464 [2024-11-20 21:03:56.443338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:38.464 [2024-11-20 21:03:56.443344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:38.464 [2024-11-20 21:03:56.443349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:38.464 [2024-11-20 21:03:56.443357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:38.464 [2024-11-20 21:03:56.443363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:38.464 [2024-11-20 21:03:56.443368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:38.464 [2024-11-20 21:03:56.443373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:38.464 [2024-11-20 21:03:56.443378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:38.464 [2024-11-20 21:03:56.443385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:38.464 [2024-11-20 21:03:56.443391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:38.464 [2024-11-20 21:03:56.443396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:38.464 [2024-11-20 21:03:56.443401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.464 [2024-11-20 21:03:56.443406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:38.464 [2024-11-20 21:03:56.443411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:38.464 [2024-11-20 21:03:56.443416] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.464 [2024-11-20 21:03:56.443421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:38.464 [2024-11-20 21:03:56.443426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:38.464 [2024-11-20 21:03:56.443432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.464 [2024-11-20 21:03:56.443437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:38.464 [2024-11-20 21:03:56.443441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:38.464 [2024-11-20 21:03:56.443446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.464 [2024-11-20 21:03:56.443451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:38.464 [2024-11-20 21:03:56.443456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:38.464 [2024-11-20 21:03:56.443460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.464 [2024-11-20 21:03:56.443468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:38.464 [2024-11-20 21:03:56.443474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:38.464 [2024-11-20 21:03:56.443479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:38.464 [2024-11-20 21:03:56.443485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:38.464 [2024-11-20 21:03:56.443491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:38.464 [2024-11-20 21:03:56.443497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:38.464 [2024-11-20 21:03:56.443502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:38.464 [2024-11-20 21:03:56.443508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:38.464 [2024-11-20 21:03:56.443513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:38.464 [2024-11-20 21:03:56.443519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:38.464 [2024-11-20 21:03:56.443525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:38.464 [2024-11-20 21:03:56.443531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.464 [2024-11-20 21:03:56.443537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:38.464 [2024-11-20 21:03:56.443543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:38.464 [2024-11-20 21:03:56.443549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.464 [2024-11-20 21:03:56.443555] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:38.465 [2024-11-20 21:03:56.443563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:38.465 [2024-11-20 21:03:56.443571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:38.465 [2024-11-20 21:03:56.443578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:38.465 [2024-11-20 21:03:56.443585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:38.465 [2024-11-20 21:03:56.443590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:38.465 [2024-11-20 21:03:56.443596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:38.465 
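The region dump above reports offsets and sizes in MiB; the superblock metadata layout records that follow give the same extents as blk_offs/blk_sz counts of what appear to be 4 KiB FTL blocks (so the l2p entry at blk_offs:0x20 blk_sz:0x5000 is the 0.12 MiB offset, 80.00 MiB region already listed). A small conversion sketch, assuming that 4 KiB block size:

    #include <stdint.h>
    #include <stdio.h>

    /* Convert a region extent from 4 KiB FTL blocks to MiB, matching the
     * "offset ... MiB / blocks ... MiB" figures in the dump above. */
    static double blocks_to_mib(uint64_t blocks)
    {
        return (double)blocks * 4096.0 / (1024.0 * 1024.0);
    }

    int main(void)
    {
        printf("sb:  offset %.2f MiB, size %.2f MiB\n",
               blocks_to_mib(0x0), blocks_to_mib(0x20));    /* 0.00 / 0.12 */
        printf("l2p: offset %.2f MiB, size %.2f MiB\n",
               blocks_to_mib(0x20), blocks_to_mib(0x5000)); /* 0.12 / 80.00 */
        return 0;
    }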
[2024-11-20 21:03:56.443602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:38.465 [2024-11-20 21:03:56.443607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:38.465 [2024-11-20 21:03:56.443613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:38.465 [2024-11-20 21:03:56.443619] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:38.465 [2024-11-20 21:03:56.443628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:38.465 [2024-11-20 21:03:56.443635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:38.465 [2024-11-20 21:03:56.443641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:38.465 [2024-11-20 21:03:56.443647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:38.465 [2024-11-20 21:03:56.443653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:38.465 [2024-11-20 21:03:56.443659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:38.465 [2024-11-20 21:03:56.443666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:38.465 [2024-11-20 21:03:56.443672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:38.465 [2024-11-20 21:03:56.443678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:38.465 [2024-11-20 21:03:56.443685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:38.465 [2024-11-20 21:03:56.443692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:38.465 [2024-11-20 21:03:56.443698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:38.465 [2024-11-20 21:03:56.443708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:38.465 [2024-11-20 21:03:56.443714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:38.465 [2024-11-20 21:03:56.443720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:38.465 [2024-11-20 21:03:56.443726] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:38.465 [2024-11-20 21:03:56.443736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:38.465 [2024-11-20 21:03:56.443758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:38.465 [2024-11-20 21:03:56.443765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:38.465 [2024-11-20 21:03:56.443771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:38.465 [2024-11-20 21:03:56.443777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:38.465 [2024-11-20 21:03:56.443784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.465 [2024-11-20 21:03:56.443793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:38.465 [2024-11-20 21:03:56.443800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:21:38.465 [2024-11-20 21:03:56.443809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.465 [2024-11-20 21:03:56.451419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.465 [2024-11-20 21:03:56.451445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:38.465 [2024-11-20 21:03:56.451455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.577 ms 00:21:38.465 [2024-11-20 21:03:56.451464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.465 [2024-11-20 21:03:56.451526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.465 [2024-11-20 21:03:56.451532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:38.465 [2024-11-20 21:03:56.451539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:38.465 [2024-11-20 21:03:56.451544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.465 [2024-11-20 21:03:56.470611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.465 [2024-11-20 21:03:56.470678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:38.465 [2024-11-20 21:03:56.470709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.030 ms 00:21:38.465 [2024-11-20 21:03:56.470726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.465 [2024-11-20 21:03:56.470842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.465 [2024-11-20 21:03:56.470865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:38.465 [2024-11-20 21:03:56.470883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:38.465 [2024-11-20 21:03:56.470898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.465 [2024-11-20 21:03:56.471392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.465 [2024-11-20 21:03:56.471442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:38.465 [2024-11-20 21:03:56.471462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:21:38.465 [2024-11-20 21:03:56.471490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.465 [2024-11-20 21:03:56.471734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.465 [2024-11-20 21:03:56.471791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:38.465 [2024-11-20 21:03:56.471809] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:21:38.465 [2024-11-20 21:03:56.471824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.465 [2024-11-20 21:03:56.478715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.465 [2024-11-20 21:03:56.478744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:38.465 [2024-11-20 21:03:56.478776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.857 ms 00:21:38.465 [2024-11-20 21:03:56.478782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.465 [2024-11-20 21:03:56.481131] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:38.465 [2024-11-20 21:03:56.481159] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:38.465 [2024-11-20 21:03:56.481170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.465 [2024-11-20 21:03:56.481176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:38.465 [2024-11-20 21:03:56.481183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.322 ms 00:21:38.465 [2024-11-20 21:03:56.481188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.465 [2024-11-20 21:03:56.492473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.465 [2024-11-20 21:03:56.492601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:38.465 [2024-11-20 21:03:56.492614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.252 ms 00:21:38.465 [2024-11-20 21:03:56.492620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.465 [2024-11-20 21:03:56.494255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.466 [2024-11-20 21:03:56.494280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:38.466 [2024-11-20 21:03:56.494286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.618 ms 00:21:38.466 [2024-11-20 21:03:56.494292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.466 [2024-11-20 21:03:56.495585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.466 [2024-11-20 21:03:56.495610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:38.466 [2024-11-20 21:03:56.495617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:21:38.466 [2024-11-20 21:03:56.495622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.466 [2024-11-20 21:03:56.495951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.466 [2024-11-20 21:03:56.495978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:38.466 [2024-11-20 21:03:56.495995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:21:38.466 [2024-11-20 21:03:56.496011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.466 [2024-11-20 21:03:56.510199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.466 [2024-11-20 21:03:56.510323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:38.466 [2024-11-20 21:03:56.510336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
14.136 ms 00:21:38.466 [2024-11-20 21:03:56.510343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.466 [2024-11-20 21:03:56.516042] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:38.466 [2024-11-20 21:03:56.517873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.466 [2024-11-20 21:03:56.517895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:38.466 [2024-11-20 21:03:56.517906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.505 ms 00:21:38.466 [2024-11-20 21:03:56.517913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.466 [2024-11-20 21:03:56.517955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.466 [2024-11-20 21:03:56.517966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:38.466 [2024-11-20 21:03:56.517972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:38.466 [2024-11-20 21:03:56.517978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.466 [2024-11-20 21:03:56.518034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.466 [2024-11-20 21:03:56.518041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:38.466 [2024-11-20 21:03:56.518048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:38.466 [2024-11-20 21:03:56.518056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.466 [2024-11-20 21:03:56.518074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.466 [2024-11-20 21:03:56.518081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:38.466 [2024-11-20 21:03:56.518087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:38.466 [2024-11-20 21:03:56.518093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.466 [2024-11-20 21:03:56.518116] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:38.466 [2024-11-20 21:03:56.518123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.466 [2024-11-20 21:03:56.518131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:38.466 [2024-11-20 21:03:56.518136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:38.466 [2024-11-20 21:03:56.518142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.466 [2024-11-20 21:03:56.521424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.466 [2024-11-20 21:03:56.521450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:38.466 [2024-11-20 21:03:56.521463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.267 ms 00:21:38.466 [2024-11-20 21:03:56.521469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.466 [2024-11-20 21:03:56.521521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.466 [2024-11-20 21:03:56.521528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:38.466 [2024-11-20 21:03:56.521538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:38.466 [2024-11-20 21:03:56.521544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.466 
[2024-11-20 21:03:56.522300] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 88.395 ms, result 0 00:21:39.857  [2024-11-20T21:03:58.919Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-20T21:03:59.863Z] Copying: 25/1024 [MB] (11 MBps) [2024-11-20T21:04:00.805Z] Copying: 47/1024 [MB] (21 MBps) [2024-11-20T21:04:01.747Z] Copying: 62/1024 [MB] (15 MBps) [2024-11-20T21:04:02.690Z] Copying: 83/1024 [MB] (20 MBps) [2024-11-20T21:04:04.078Z] Copying: 100/1024 [MB] (17 MBps) [2024-11-20T21:04:05.019Z] Copying: 111/1024 [MB] (10 MBps) [2024-11-20T21:04:05.963Z] Copying: 123/1024 [MB] (11 MBps) [2024-11-20T21:04:06.908Z] Copying: 133/1024 [MB] (10 MBps) [2024-11-20T21:04:07.854Z] Copying: 147/1024 [MB] (14 MBps) [2024-11-20T21:04:08.798Z] Copying: 164/1024 [MB] (16 MBps) [2024-11-20T21:04:09.750Z] Copying: 175/1024 [MB] (10 MBps) [2024-11-20T21:04:10.691Z] Copying: 196/1024 [MB] (21 MBps) [2024-11-20T21:04:12.077Z] Copying: 217/1024 [MB] (20 MBps) [2024-11-20T21:04:13.018Z] Copying: 228/1024 [MB] (10 MBps) [2024-11-20T21:04:13.963Z] Copying: 239/1024 [MB] (11 MBps) [2024-11-20T21:04:14.905Z] Copying: 250/1024 [MB] (10 MBps) [2024-11-20T21:04:15.850Z] Copying: 267/1024 [MB] (16 MBps) [2024-11-20T21:04:16.797Z] Copying: 278/1024 [MB] (10 MBps) [2024-11-20T21:04:17.739Z] Copying: 289/1024 [MB] (11 MBps) [2024-11-20T21:04:18.682Z] Copying: 306/1024 [MB] (17 MBps) [2024-11-20T21:04:20.071Z] Copying: 317/1024 [MB] (10 MBps) [2024-11-20T21:04:21.016Z] Copying: 330/1024 [MB] (13 MBps) [2024-11-20T21:04:21.960Z] Copying: 349/1024 [MB] (18 MBps) [2024-11-20T21:04:23.005Z] Copying: 363/1024 [MB] (14 MBps) [2024-11-20T21:04:23.950Z] Copying: 378/1024 [MB] (14 MBps) [2024-11-20T21:04:24.894Z] Copying: 388/1024 [MB] (10 MBps) [2024-11-20T21:04:25.839Z] Copying: 399/1024 [MB] (10 MBps) [2024-11-20T21:04:26.783Z] Copying: 410/1024 [MB] (10 MBps) [2024-11-20T21:04:27.727Z] Copying: 424/1024 [MB] (14 MBps) [2024-11-20T21:04:28.670Z] Copying: 435/1024 [MB] (10 MBps) [2024-11-20T21:04:30.057Z] Copying: 446/1024 [MB] (10 MBps) [2024-11-20T21:04:31.001Z] Copying: 456/1024 [MB] (10 MBps) [2024-11-20T21:04:31.946Z] Copying: 470/1024 [MB] (14 MBps) [2024-11-20T21:04:32.890Z] Copying: 484/1024 [MB] (13 MBps) [2024-11-20T21:04:33.834Z] Copying: 500/1024 [MB] (15 MBps) [2024-11-20T21:04:34.804Z] Copying: 510/1024 [MB] (10 MBps) [2024-11-20T21:04:35.748Z] Copying: 523/1024 [MB] (12 MBps) [2024-11-20T21:04:36.692Z] Copying: 537/1024 [MB] (13 MBps) [2024-11-20T21:04:38.077Z] Copying: 549/1024 [MB] (11 MBps) [2024-11-20T21:04:39.023Z] Copying: 560/1024 [MB] (10 MBps) [2024-11-20T21:04:39.966Z] Copying: 570/1024 [MB] (10 MBps) [2024-11-20T21:04:40.911Z] Copying: 581/1024 [MB] (10 MBps) [2024-11-20T21:04:41.856Z] Copying: 591/1024 [MB] (10 MBps) [2024-11-20T21:04:42.801Z] Copying: 602/1024 [MB] (10 MBps) [2024-11-20T21:04:43.747Z] Copying: 613/1024 [MB] (10 MBps) [2024-11-20T21:04:44.691Z] Copying: 623/1024 [MB] (10 MBps) [2024-11-20T21:04:46.078Z] Copying: 634/1024 [MB] (10 MBps) [2024-11-20T21:04:47.024Z] Copying: 645/1024 [MB] (11 MBps) [2024-11-20T21:04:47.969Z] Copying: 656/1024 [MB] (11 MBps) [2024-11-20T21:04:48.914Z] Copying: 667/1024 [MB] (10 MBps) [2024-11-20T21:04:49.858Z] Copying: 677/1024 [MB] (10 MBps) [2024-11-20T21:04:50.802Z] Copying: 688/1024 [MB] (10 MBps) [2024-11-20T21:04:51.803Z] Copying: 699/1024 [MB] (10 MBps) [2024-11-20T21:04:52.749Z] Copying: 711/1024 [MB] (12 MBps) [2024-11-20T21:04:53.692Z] Copying: 724/1024 [MB] (13 MBps) 
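The bracketed progress records track per-interval rates of roughly 10-20 MBps; the completion record further down reports an average of 13 MBps, which is consistent with moving 1024 MB over the roughly 78 s between the spdk_dd start (~21:03:56) and the final record (~21:05:13.6). A back-of-envelope check:

    #include <stdio.h>

    int main(void)
    {
        /* Figures taken from this log; the wall-clock span is approximate. */
        double total_mb = 1024.0;
        double elapsed_s = 77.6;

        /* ~13.2, matching the "(average 13 MBps)" completion record. */
        printf("average: %.1f MBps\n", total_mb / elapsed_s);
        return 0;
    }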
[2024-11-20T21:04:55.075Z] Copying: 740/1024 [MB] (15 MBps) [2024-11-20T21:04:56.020Z] Copying: 751/1024 [MB] (11 MBps) [2024-11-20T21:04:56.965Z] Copying: 769/1024 [MB] (17 MBps) [2024-11-20T21:04:57.911Z] Copying: 780/1024 [MB] (11 MBps) [2024-11-20T21:04:58.855Z] Copying: 791/1024 [MB] (10 MBps) [2024-11-20T21:04:59.798Z] Copying: 802/1024 [MB] (11 MBps) [2024-11-20T21:05:00.742Z] Copying: 821/1024 [MB] (19 MBps) [2024-11-20T21:05:01.692Z] Copying: 836/1024 [MB] (14 MBps) [2024-11-20T21:05:03.079Z] Copying: 847/1024 [MB] (11 MBps) [2024-11-20T21:05:04.025Z] Copying: 858/1024 [MB] (10 MBps) [2024-11-20T21:05:04.970Z] Copying: 874/1024 [MB] (16 MBps) [2024-11-20T21:05:05.915Z] Copying: 893/1024 [MB] (18 MBps) [2024-11-20T21:05:06.862Z] Copying: 914/1024 [MB] (20 MBps) [2024-11-20T21:05:07.808Z] Copying: 930/1024 [MB] (16 MBps) [2024-11-20T21:05:08.752Z] Copying: 943/1024 [MB] (13 MBps) [2024-11-20T21:05:09.695Z] Copying: 959/1024 [MB] (15 MBps) [2024-11-20T21:05:11.081Z] Copying: 974/1024 [MB] (15 MBps) [2024-11-20T21:05:11.655Z] Copying: 985/1024 [MB] (10 MBps) [2024-11-20T21:05:13.044Z] Copying: 996/1024 [MB] (10 MBps) [2024-11-20T21:05:13.616Z] Copying: 1007/1024 [MB] (11 MBps) [2024-11-20T21:05:13.616Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-20 21:05:13.607468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.497 [2024-11-20 21:05:13.607738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:55.497 [2024-11-20 21:05:13.607781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:55.497 [2024-11-20 21:05:13.607791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.497 [2024-11-20 21:05:13.607839] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:55.497 [2024-11-20 21:05:13.608603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.497 [2024-11-20 21:05:13.608628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:55.497 [2024-11-20 21:05:13.608638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:22:55.497 [2024-11-20 21:05:13.608647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.497 [2024-11-20 21:05:13.608894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.497 [2024-11-20 21:05:13.608906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:55.497 [2024-11-20 21:05:13.608920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:22:55.497 [2024-11-20 21:05:13.608928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.497 [2024-11-20 21:05:13.612436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.497 [2024-11-20 21:05:13.612453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:55.497 [2024-11-20 21:05:13.612463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.491 ms 00:22:55.497 [2024-11-20 21:05:13.612472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.760 [2024-11-20 21:05:13.619373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.760 [2024-11-20 21:05:13.619421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:55.760 [2024-11-20 21:05:13.619432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.884 ms 00:22:55.760 
[2024-11-20 21:05:13.619442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.760 [2024-11-20 21:05:13.622370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.760 [2024-11-20 21:05:13.622545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:55.760 [2024-11-20 21:05:13.622564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.861 ms 00:22:55.760 [2024-11-20 21:05:13.622572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.760 [2024-11-20 21:05:13.628956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.760 [2024-11-20 21:05:13.629007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:55.760 [2024-11-20 21:05:13.629020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.053 ms 00:22:55.760 [2024-11-20 21:05:13.629029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.760 [2024-11-20 21:05:13.629137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.760 [2024-11-20 21:05:13.629160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:55.760 [2024-11-20 21:05:13.629174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:22:55.760 [2024-11-20 21:05:13.629182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.760 [2024-11-20 21:05:13.632618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.760 [2024-11-20 21:05:13.632657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:55.760 [2024-11-20 21:05:13.632666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.413 ms 00:22:55.760 [2024-11-20 21:05:13.632674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.761 [2024-11-20 21:05:13.635493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.761 [2024-11-20 21:05:13.635529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:55.761 [2024-11-20 21:05:13.635539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.773 ms 00:22:55.761 [2024-11-20 21:05:13.635545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.761 [2024-11-20 21:05:13.637730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.761 [2024-11-20 21:05:13.637936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:55.761 [2024-11-20 21:05:13.637962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.140 ms 00:22:55.761 [2024-11-20 21:05:13.637981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.761 [2024-11-20 21:05:13.640250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.761 [2024-11-20 21:05:13.640421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:55.761 [2024-11-20 21:05:13.640479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.151 ms 00:22:55.761 [2024-11-20 21:05:13.640501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.761 [2024-11-20 21:05:13.640545] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:55.761 [2024-11-20 21:05:13.640574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 
21:05:13.640606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.640635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.640707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.640737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.640791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.640820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.640849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.640908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.640941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.640969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.640997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.641025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.641053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.641081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.641152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.641183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.641212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.641241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.641790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.641837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.641867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.641898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.642331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.642398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.642428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 
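Every management step in this log is traced as the same four NOTICE records: the step type (Action or Rollback), its name, its duration, and a status. A hypothetical C helper mirroring that quadruple, fed values from the "Persist superblock" step above — SPDK's actual trace_step in mngt/ftl_mngt.c works from its internal step descriptor, not arguments like these:

    #include <stdio.h>

    /* Illustrative stand-in for the Action/name/duration/status records
     * that trace_step emits for each management step. */
    static void trace_step(const char *type, const char *name,
                           double duration_ms, int status)
    {
        printf("[FTL][ftl0] %s\n", type);
        printf("[FTL][ftl0]     name:     %s\n", name);
        printf("[FTL][ftl0]     duration: %.3f ms\n", duration_ms);
        printf("[FTL][ftl0]     status:   %d\n", status);
    }

    int main(void)
    {
        /* Values from the log above: Persist superblock, 2.140 ms, status 0. */
        trace_step("Action", "Persist superblock", 2.140, 0);
        return 0;
    }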
[2024-11-20 21:05:13.642456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.642486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.642573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.642602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.642630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.642658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.642686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.642715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.642760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.642949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.642980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 
state: free 00:22:55.761 [2024-11-20 21:05:13.643150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 
0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:55.761 [2024-11-20 21:05:13.643347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:55.762 [2024-11-20 21:05:13.643538] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
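For orientation, each line of the dump above reports one FTL band as "valid blocks / capacity", its write count, and its state; every band shown here is still free after the shutdown. A minimal C sketch of the bookkeeping such a dump implies follows; the struct layout and names are illustrative assumptions, not SPDK's actual struct ftl_band or its API.

#include <stdio.h>

/* Hypothetical per-band record mirroring the dump format above. */
enum band_state { BAND_FREE, BAND_OPEN, BAND_CLOSED };

struct band_info {
    unsigned id;
    unsigned valid_blocks; /* blocks holding currently valid data     */
    unsigned num_blocks;   /* band capacity; 261120 in this run       */
    unsigned wr_cnt;       /* how many times the band has been written */
    enum band_state state;
};

static const char *state_str[] = { "free", "open", "closed" };

/* Prints "Band N: valid / total wr_cnt: W state: S", as in the log. */
static void dump_band(const struct band_info *b)
{
    printf("Band %u: %u / %u wr_cnt: %u state: %s\n",
           b->id, b->valid_blocks, b->num_blocks, b->wr_cnt,
           state_str[b->state]);
}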
00:22:55.762 [2024-11-20 21:05:13.643538] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:55.762 [2024-11-20 21:05:13.643547] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d39077d7-7682-4ed0-b438-a9982d49a5b0
00:22:55.762 [2024-11-20 21:05:13.643559] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:55.762 [2024-11-20 21:05:13.643567] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:22:55.762 [2024-11-20 21:05:13.643574] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:22:55.762 [2024-11-20 21:05:13.643582] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:22:55.762 [2024-11-20 21:05:13.643589] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:55.762 [2024-11-20 21:05:13.643597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:55.762 [2024-11-20 21:05:13.643605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:55.762 [2024-11-20 21:05:13.643611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:55.762 [2024-11-20 21:05:13.643617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:55.762 [2024-11-20 21:05:13.643628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.762 [2024-11-20 21:05:13.643638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:55.762 [2024-11-20 21:05:13.643655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.084 ms
00:22:55.762 [2024-11-20 21:05:13.643663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.762 [2024-11-20 21:05:13.646345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.762 [2024-11-20 21:05:13.646501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:22:55.762 [2024-11-20 21:05:13.646556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.649 ms
00:22:55.762 [2024-11-20 21:05:13.646579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.762 [2024-11-20 21:05:13.646724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.762 [2024-11-20 21:05:13.646839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:22:55.762 [2024-11-20 21:05:13.646916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms
00:22:55.762 [2024-11-20 21:05:13.646939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.762 [2024-11-20 21:05:13.654395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.762 [2024-11-20 21:05:13.654551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:55.762 [2024-11-20 21:05:13.654606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.762 [2024-11-20 21:05:13.654617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.762 [2024-11-20 21:05:13.654693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.762 [2024-11-20 21:05:13.654706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:55.762 [2024-11-20 21:05:13.654715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.762 [2024-11-20 21:05:13.654723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
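The "WAF: inf" in the statistics above follows directly from the two counters printed just before it: 960 total writes against 0 user writes. Write amplification is conventionally total device writes divided by user writes, so with no user data written yet the ratio is undefined and reported as infinite. A hedged sketch of that arithmetic (the exact expression in ftl_debug.c may differ):

#include <math.h>
#include <stdio.h>

/* Conceptual write-amplification factor: device writes per user write.
 * total = 960 and user = 0 in this run, hence the "inf" in the log. */
static double waf(double total_writes, double user_writes)
{
    return user_writes > 0.0 ? total_writes / user_writes : INFINITY;
}

int main(void)
{
    printf("WAF: %g\n", waf(960.0, 0.0)); /* prints: WAF: inf */
    return 0;
}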
00:22:55.762 [2024-11-20 21:05:13.654826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.762 [2024-11-20 21:05:13.654838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:22:55.762 [2024-11-20 21:05:13.654847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.762 [2024-11-20 21:05:13.654862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.762 [2024-11-20 21:05:13.654879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.762 [2024-11-20 21:05:13.654892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:22:55.762 [2024-11-20 21:05:13.654900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.762 [2024-11-20 21:05:13.654911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.762 [2024-11-20 21:05:13.668498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.762 [2024-11-20 21:05:13.668661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:22:55.762 [2024-11-20 21:05:13.668727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.762 [2024-11-20 21:05:13.668773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.762 [2024-11-20 21:05:13.678800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.762 [2024-11-20 21:05:13.678964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:22:55.762 [2024-11-20 21:05:13.679016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.762 [2024-11-20 21:05:13.679038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.762 [2024-11-20 21:05:13.679138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.762 [2024-11-20 21:05:13.679163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:22:55.762 [2024-11-20 21:05:13.679183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.762 [2024-11-20 21:05:13.679202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.762 [2024-11-20 21:05:13.679315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.762 [2024-11-20 21:05:13.679341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:22:55.762 [2024-11-20 21:05:13.679361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.762 [2024-11-20 21:05:13.679385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.762 [2024-11-20 21:05:13.679474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.762 [2024-11-20 21:05:13.679497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:22:55.762 [2024-11-20 21:05:13.679517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.762 [2024-11-20 21:05:13.679581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.762 [2024-11-20 21:05:13.679640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.762 [2024-11-20 21:05:13.679667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:22:55.762 [2024-11-20 21:05:13.679688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.762 [2024-11-20 21:05:13.679712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.762 [2024-11-20 21:05:13.679780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.762 [2024-11-20 21:05:13.680233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:55.762 [2024-11-20 21:05:13.680330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.762 [2024-11-20 21:05:13.680359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.762 [2024-11-20 21:05:13.680435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.762 [2024-11-20 21:05:13.680446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:55.762 [2024-11-20 21:05:13.680455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.762 [2024-11-20 21:05:13.680472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.762 [2024-11-20 21:05:13.680614] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.108 ms, result 0
00:22:56.023
00:22:56.023
00:22:56.023 21:05:13 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:22:58.574 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:22:58.574 21:05:16 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:22:58.574 [2024-11-20 21:05:16.227838] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:22:58.575 [2024-11-20 21:05:16.227972] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89765 ]
00:22:58.575 [2024-11-20 21:05:16.375567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:58.575 [2024-11-20 21:05:16.404197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:58.575 [2024-11-20 21:05:16.513142] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:22:58.575 [2024-11-20 21:05:16.513213] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:22:58.575 [2024-11-20 21:05:16.675677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:58.575 [2024-11-20 21:05:16.675759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:22:58.575 [2024-11-20 21:05:16.675779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:22:58.575 [2024-11-20 21:05:16.675788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.575 [2024-11-20 21:05:16.675857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:58.575 [2024-11-20 21:05:16.675869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:58.575 [2024-11-20 21:05:16.675878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms
00:22:58.575 [2024-11-20 21:05:16.675886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.575 [2024-11-20 21:05:16.675911] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:22:58.575 [2024-11-20 21:05:16.676704] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
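Two steps of the restore test are visible above: restore.sh first verifies the on-disk test file against its recorded checksum (md5sum -c), then replays the file onto the FTL bdev with spdk_dd at block offset 131072 (--seek). Functionally this is a seek-then-copy in the style of dd; below is a plain POSIX sketch of that pattern, assuming a 4 KiB logical block (an assumption, since the block size is not printed here) — spdk_dd itself submits I/O through the SPDK bdev layer rather than a file descriptor, which is how it can target ftl0 directly.

#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

#define BLOCK_SIZE 4096 /* assumed logical block size of the target */

/* Copy file "in" onto "out" starting at block offset seek_blocks,
 * mimicking spdk_dd --if=in --ob=out --seek=seek_blocks. */
static int dd_seek_copy(const char *in, const char *out, uint64_t seek_blocks)
{
    char buf[1 << 16];
    ssize_t n = 0;
    int ifd = open(in, O_RDONLY);
    int ofd = open(out, O_WRONLY);
    if (ifd < 0 || ofd < 0)
        return -1;
    off_t off = (off_t)(seek_blocks * BLOCK_SIZE); /* --seek offset */
    while ((n = read(ifd, buf, sizeof buf)) > 0) {
        if (pwrite(ofd, buf, (size_t)n, off) != n) { n = -1; break; }
        off += n;
    }
    close(ifd);
    close(ofd);
    return n < 0 ? -1 : 0;
}

int main(int argc, char **argv)
{
    if (argc != 4)
        return 1;
    return dd_seek_copy(argv[1], argv[2],
                        strtoull(argv[3], NULL, 10)) ? 1 : 0;
}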
00:22:58.575 [2024-11-20 21:05:16.676774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:58.575 [2024-11-20 21:05:16.676786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:58.575 [2024-11-20 21:05:16.676796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms
00:22:58.575 [2024-11-20 21:05:16.676808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.575 [2024-11-20 21:05:16.678528] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:22:58.575 [2024-11-20 21:05:16.682467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:58.575 [2024-11-20 21:05:16.682522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:22:58.575 [2024-11-20 21:05:16.682534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.942 ms
00:22:58.575 [2024-11-20 21:05:16.682543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.575 [2024-11-20 21:05:16.682632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:58.575 [2024-11-20 21:05:16.682650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:22:58.575 [2024-11-20 21:05:16.682659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:22:58.575 [2024-11-20 21:05:16.682667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.838 [2024-11-20 21:05:16.690792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:58.838 [2024-11-20 21:05:16.690834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:22:58.838 [2024-11-20 21:05:16.690845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.078 ms
00:22:58.838 [2024-11-20 21:05:16.690861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.838 [2024-11-20 21:05:16.690977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:58.838 [2024-11-20 21:05:16.690988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:22:58.838 [2024-11-20 21:05:16.690998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms
00:22:58.838 [2024-11-20 21:05:16.691009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.838 [2024-11-20 21:05:16.691067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:58.838 [2024-11-20 21:05:16.691077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:22:58.838 [2024-11-20 21:05:16.691086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:22:58.838 [2024-11-20 21:05:16.691093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.838 [2024-11-20 21:05:16.691119] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:58.838 [2024-11-20 21:05:16.693192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:58.838 [2024-11-20 21:05:16.693235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:22:58.838 [2024-11-20 21:05:16.693246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.078 ms
00:22:58.838 [2024-11-20 21:05:16.693254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:58.838 [2024-11-20 21:05:16.693290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:22:58.838 [2024-11-20 21:05:16.693299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:58.838 [2024-11-20 21:05:16.693312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:58.838 [2024-11-20 21:05:16.693321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.838 [2024-11-20 21:05:16.693352] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:58.838 [2024-11-20 21:05:16.693374] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:58.838 [2024-11-20 21:05:16.693417] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:58.838 [2024-11-20 21:05:16.693437] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:58.838 [2024-11-20 21:05:16.693543] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:58.838 [2024-11-20 21:05:16.693559] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:58.838 [2024-11-20 21:05:16.693570] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:58.838 [2024-11-20 21:05:16.693583] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:58.838 [2024-11-20 21:05:16.693597] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:58.838 [2024-11-20 21:05:16.693605] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:58.838 [2024-11-20 21:05:16.693613] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:58.838 [2024-11-20 21:05:16.693621] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:58.838 [2024-11-20 21:05:16.693629] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:58.838 [2024-11-20 21:05:16.693638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.838 [2024-11-20 21:05:16.693646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:58.838 [2024-11-20 21:05:16.693657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:22:58.838 [2024-11-20 21:05:16.693670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.838 [2024-11-20 21:05:16.693769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.839 [2024-11-20 21:05:16.693783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:58.839 [2024-11-20 21:05:16.693791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:58.839 [2024-11-20 21:05:16.693799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.839 [2024-11-20 21:05:16.693907] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:58.839 [2024-11-20 21:05:16.693925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:58.839 [2024-11-20 21:05:16.693935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:58.839 [2024-11-20 21:05:16.693944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.839 [2024-11-20 
21:05:16.693953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:58.839 [2024-11-20 21:05:16.693961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:58.839 [2024-11-20 21:05:16.693970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:58.839 [2024-11-20 21:05:16.693979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:58.839 [2024-11-20 21:05:16.693987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:58.839 [2024-11-20 21:05:16.693995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:58.839 [2024-11-20 21:05:16.694005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:58.839 [2024-11-20 21:05:16.694013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:58.839 [2024-11-20 21:05:16.694022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:58.839 [2024-11-20 21:05:16.694030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:58.839 [2024-11-20 21:05:16.694038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:58.839 [2024-11-20 21:05:16.694050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.839 [2024-11-20 21:05:16.694058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:58.839 [2024-11-20 21:05:16.694066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:58.839 [2024-11-20 21:05:16.694074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.839 [2024-11-20 21:05:16.694082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:58.839 [2024-11-20 21:05:16.694090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:58.839 [2024-11-20 21:05:16.694098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.839 [2024-11-20 21:05:16.694105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:58.839 [2024-11-20 21:05:16.694113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:58.839 [2024-11-20 21:05:16.694120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.839 [2024-11-20 21:05:16.694128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:58.839 [2024-11-20 21:05:16.694140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:58.839 [2024-11-20 21:05:16.694148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.839 [2024-11-20 21:05:16.694180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:58.839 [2024-11-20 21:05:16.694189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:58.839 [2024-11-20 21:05:16.694196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:58.839 [2024-11-20 21:05:16.694204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:58.839 [2024-11-20 21:05:16.694212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:58.839 [2024-11-20 21:05:16.694220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:58.839 [2024-11-20 21:05:16.694228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:58.839 [2024-11-20 21:05:16.694236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 
MiB 00:22:58.839 [2024-11-20 21:05:16.694243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:58.839 [2024-11-20 21:05:16.694252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:58.839 [2024-11-20 21:05:16.694261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:58.839 [2024-11-20 21:05:16.694269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.839 [2024-11-20 21:05:16.694277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:58.839 [2024-11-20 21:05:16.694284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:58.839 [2024-11-20 21:05:16.694295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.839 [2024-11-20 21:05:16.694304] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:58.839 [2024-11-20 21:05:16.694313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:58.839 [2024-11-20 21:05:16.694325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:58.839 [2024-11-20 21:05:16.694333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:58.839 [2024-11-20 21:05:16.694343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:58.839 [2024-11-20 21:05:16.694350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:58.839 [2024-11-20 21:05:16.694357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:58.839 [2024-11-20 21:05:16.694365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:58.839 [2024-11-20 21:05:16.694372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:58.839 [2024-11-20 21:05:16.694380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:58.839 [2024-11-20 21:05:16.694388] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:58.839 [2024-11-20 21:05:16.694399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:58.839 [2024-11-20 21:05:16.694408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:58.839 [2024-11-20 21:05:16.694415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:58.839 [2024-11-20 21:05:16.694422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:58.839 [2024-11-20 21:05:16.694432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:58.839 [2024-11-20 21:05:16.694439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:58.839 [2024-11-20 21:05:16.694447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:58.839 [2024-11-20 21:05:16.694454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:58.839 [2024-11-20 21:05:16.694461] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:58.839 [2024-11-20 21:05:16.694468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:58.839 [2024-11-20 21:05:16.694475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:58.839 [2024-11-20 21:05:16.694481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:58.839 [2024-11-20 21:05:16.694495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:58.839 [2024-11-20 21:05:16.694503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:58.839 [2024-11-20 21:05:16.694510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:58.839 [2024-11-20 21:05:16.694517] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:58.839 [2024-11-20 21:05:16.694526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:58.839 [2024-11-20 21:05:16.694538] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:58.839 [2024-11-20 21:05:16.694546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:58.839 [2024-11-20 21:05:16.694553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:58.839 [2024-11-20 21:05:16.694563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:58.839 [2024-11-20 21:05:16.694571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.839 [2024-11-20 21:05:16.694579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:58.839 [2024-11-20 21:05:16.694586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:22:58.839 [2024-11-20 21:05:16.694594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.839 [2024-11-20 21:05:16.708090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.839 [2024-11-20 21:05:16.708136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:58.839 [2024-11-20 21:05:16.708147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.441 ms 00:22:58.839 [2024-11-20 21:05:16.708155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.839 [2024-11-20 21:05:16.708250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.839 [2024-11-20 21:05:16.708259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:58.839 [2024-11-20 21:05:16.708268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:58.839 [2024-11-20 21:05:16.708275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
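A note on units in the superblock metadata layout just dumped: blk_offs and blk_sz are counted in FTL blocks, whereas the region dump earlier reports MiB. Assuming the customary 4 KiB FTL block (an assumption; the block size is not printed in this log), the two views agree — for example the type:0x2 region (blk_offs:0x20 blk_sz:0x5000) appears to correspond to the l2p region reported above at offset 0.12 MiB with 80.00 MiB of blocks:

#include <stdio.h>

#define FTL_BLOCK_SIZE 4096.0 /* bytes per FTL block, assumed */
#define MIB (1024.0 * 1024.0)

int main(void)
{
    unsigned blk_offs = 0x20, blk_sz = 0x5000; /* type:0x2 (l2p) above */
    /* 0x20 blocks * 4 KiB = 0.125 MiB, printed as 0.12 in the dump */
    printf("offset: %.2f MiB\n", blk_offs * FTL_BLOCK_SIZE / MIB);
    /* 0x5000 blocks * 4 KiB = exactly 80.00 MiB */
    printf("blocks: %.2f MiB\n", blk_sz * FTL_BLOCK_SIZE / MIB);
    return 0;
}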
00:22:58.839 [2024-11-20 21:05:16.730025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.839 [2024-11-20 21:05:16.730080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:58.839 [2024-11-20 21:05:16.730094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.691 ms 00:22:58.839 [2024-11-20 21:05:16.730102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.839 [2024-11-20 21:05:16.730152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.839 [2024-11-20 21:05:16.730195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:58.839 [2024-11-20 21:05:16.730211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:58.839 [2024-11-20 21:05:16.730222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.839 [2024-11-20 21:05:16.730824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.839 [2024-11-20 21:05:16.730872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:58.840 [2024-11-20 21:05:16.730888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:22:58.840 [2024-11-20 21:05:16.730902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.731110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.731135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:58.840 [2024-11-20 21:05:16.731155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:22:58.840 [2024-11-20 21:05:16.731167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.739415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.739474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:58.840 [2024-11-20 21:05:16.739488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.220 ms 00:22:58.840 [2024-11-20 21:05:16.739499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.743383] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:58.840 [2024-11-20 21:05:16.743434] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:58.840 [2024-11-20 21:05:16.743447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.743455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:58.840 [2024-11-20 21:05:16.743464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.826 ms 00:22:58.840 [2024-11-20 21:05:16.743471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.759215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.759276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:58.840 [2024-11-20 21:05:16.759288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.686 ms 00:22:58.840 [2024-11-20 21:05:16.759296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.762070] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.762118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:58.840 [2024-11-20 21:05:16.762128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.724 ms 00:22:58.840 [2024-11-20 21:05:16.762136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.764867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.764915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:58.840 [2024-11-20 21:05:16.764924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.672 ms 00:22:58.840 [2024-11-20 21:05:16.764932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.765288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.765310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:58.840 [2024-11-20 21:05:16.765322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:22:58.840 [2024-11-20 21:05:16.765330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.791304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.791357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:58.840 [2024-11-20 21:05:16.791369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.956 ms 00:22:58.840 [2024-11-20 21:05:16.791377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.799495] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:58.840 [2024-11-20 21:05:16.802556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.802608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:58.840 [2024-11-20 21:05:16.802622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.132 ms 00:22:58.840 [2024-11-20 21:05:16.802631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.802706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.802717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:58.840 [2024-11-20 21:05:16.802726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:58.840 [2024-11-20 21:05:16.802735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.802829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.802840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:58.840 [2024-11-20 21:05:16.802852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:22:58.840 [2024-11-20 21:05:16.802860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.802890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.802899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:58.840 [2024-11-20 21:05:16.802908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.010 ms 00:22:58.840 [2024-11-20 21:05:16.802918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.802954] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:58.840 [2024-11-20 21:05:16.802965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.802973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:58.840 [2024-11-20 21:05:16.802981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:58.840 [2024-11-20 21:05:16.802992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.807814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.807857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:58.840 [2024-11-20 21:05:16.807868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.801 ms 00:22:58.840 [2024-11-20 21:05:16.807876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.807958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.840 [2024-11-20 21:05:16.807977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:58.840 [2024-11-20 21:05:16.807987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:58.840 [2024-11-20 21:05:16.807995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.840 [2024-11-20 21:05:16.809221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.072 ms, result 0 00:22:59.783  [2024-11-20T21:05:18.848Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-20T21:05:20.258Z] Copying: 30/1024 [MB] (14 MBps) [2024-11-20T21:05:20.878Z] Copying: 48/1024 [MB] (17 MBps) [2024-11-20T21:05:22.264Z] Copying: 82/1024 [MB] (34 MBps) [2024-11-20T21:05:22.838Z] Copying: 116/1024 [MB] (34 MBps) [2024-11-20T21:05:24.223Z] Copying: 136/1024 [MB] (20 MBps) [2024-11-20T21:05:25.168Z] Copying: 152/1024 [MB] (15 MBps) [2024-11-20T21:05:26.113Z] Copying: 178/1024 [MB] (26 MBps) [2024-11-20T21:05:27.056Z] Copying: 212/1024 [MB] (33 MBps) [2024-11-20T21:05:27.997Z] Copying: 238/1024 [MB] (26 MBps) [2024-11-20T21:05:28.941Z] Copying: 260/1024 [MB] (21 MBps) [2024-11-20T21:05:29.883Z] Copying: 280/1024 [MB] (20 MBps) [2024-11-20T21:05:30.827Z] Copying: 301/1024 [MB] (21 MBps) [2024-11-20T21:05:32.213Z] Copying: 312/1024 [MB] (10 MBps) [2024-11-20T21:05:33.157Z] Copying: 331/1024 [MB] (19 MBps) [2024-11-20T21:05:34.101Z] Copying: 343/1024 [MB] (11 MBps) [2024-11-20T21:05:35.046Z] Copying: 379/1024 [MB] (35 MBps) [2024-11-20T21:05:35.990Z] Copying: 415/1024 [MB] (35 MBps) [2024-11-20T21:05:36.933Z] Copying: 452/1024 [MB] (37 MBps) [2024-11-20T21:05:37.877Z] Copying: 493/1024 [MB] (41 MBps) [2024-11-20T21:05:39.263Z] Copying: 525/1024 [MB] (31 MBps) [2024-11-20T21:05:39.834Z] Copying: 539/1024 [MB] (14 MBps) [2024-11-20T21:05:41.220Z] Copying: 562/1024 [MB] (22 MBps) [2024-11-20T21:05:42.164Z] Copying: 583/1024 [MB] (21 MBps) [2024-11-20T21:05:43.109Z] Copying: 598/1024 [MB] (14 MBps) [2024-11-20T21:05:44.054Z] Copying: 617/1024 [MB] (19 MBps) [2024-11-20T21:05:44.997Z] Copying: 630/1024 [MB] (12 MBps) [2024-11-20T21:05:45.942Z] Copying: 645/1024 [MB] (14 MBps) [2024-11-20T21:05:46.885Z] Copying: 671/1024 [MB] (26 MBps) 
[2024-11-20T21:05:47.830Z] Copying: 687/1024 [MB] (16 MBps) [2024-11-20T21:05:49.218Z] Copying: 706/1024 [MB] (18 MBps) [2024-11-20T21:05:49.863Z] Copying: 719/1024 [MB] (13 MBps) [2024-11-20T21:05:51.250Z] Copying: 754/1024 [MB] (35 MBps) [2024-11-20T21:05:52.194Z] Copying: 778/1024 [MB] (23 MBps) [2024-11-20T21:05:53.139Z] Copying: 818/1024 [MB] (40 MBps) [2024-11-20T21:05:54.083Z] Copying: 845/1024 [MB] (26 MBps) [2024-11-20T21:05:55.028Z] Copying: 862/1024 [MB] (17 MBps) [2024-11-20T21:05:55.972Z] Copying: 876/1024 [MB] (13 MBps) [2024-11-20T21:05:56.916Z] Copying: 916/1024 [MB] (40 MBps) [2024-11-20T21:05:57.860Z] Copying: 944/1024 [MB] (28 MBps) [2024-11-20T21:05:59.248Z] Copying: 981/1024 [MB] (37 MBps) [2024-11-20T21:06:00.190Z] Copying: 995/1024 [MB] (14 MBps) [2024-11-20T21:06:01.134Z] Copying: 1014/1024 [MB] (18 MBps) [2024-11-20T21:06:01.396Z] Copying: 1048128/1048576 [kB] (9284 kBps) [2024-11-20T21:06:01.396Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-20 21:06:01.226398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.277 [2024-11-20 21:06:01.226478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:43.277 [2024-11-20 21:06:01.226495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:43.277 [2024-11-20 21:06:01.226505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.277 [2024-11-20 21:06:01.227944] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:43.277 [2024-11-20 21:06:01.229263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.277 [2024-11-20 21:06:01.229316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:43.277 [2024-11-20 21:06:01.229328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms 00:23:43.277 [2024-11-20 21:06:01.229349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.277 [2024-11-20 21:06:01.245153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.277 [2024-11-20 21:06:01.245208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:43.277 [2024-11-20 21:06:01.245233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.506 ms 00:23:43.277 [2024-11-20 21:06:01.245241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.277 [2024-11-20 21:06:01.269024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.277 [2024-11-20 21:06:01.269076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:43.277 [2024-11-20 21:06:01.269088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.764 ms 00:23:43.277 [2024-11-20 21:06:01.269097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.277 [2024-11-20 21:06:01.275610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.277 [2024-11-20 21:06:01.275672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:43.277 [2024-11-20 21:06:01.275685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.465 ms 00:23:43.277 [2024-11-20 21:06:01.275702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.277 [2024-11-20 21:06:01.278776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.277 [2024-11-20 21:06:01.278826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist NV cache metadata 00:23:43.277 [2024-11-20 21:06:01.278837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.028 ms 00:23:43.277 [2024-11-20 21:06:01.278844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.277 [2024-11-20 21:06:01.284086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.277 [2024-11-20 21:06:01.284141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:43.277 [2024-11-20 21:06:01.284153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.198 ms 00:23:43.277 [2024-11-20 21:06:01.284161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.540 [2024-11-20 21:06:01.583937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.540 [2024-11-20 21:06:01.584005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:43.540 [2024-11-20 21:06:01.584020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 299.721 ms 00:23:43.540 [2024-11-20 21:06:01.584029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.540 [2024-11-20 21:06:01.587634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.540 [2024-11-20 21:06:01.587685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:43.540 [2024-11-20 21:06:01.587696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.587 ms 00:23:43.540 [2024-11-20 21:06:01.587704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.540 [2024-11-20 21:06:01.590718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.540 [2024-11-20 21:06:01.590781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:43.540 [2024-11-20 21:06:01.590793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.970 ms 00:23:43.540 [2024-11-20 21:06:01.590800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.540 [2024-11-20 21:06:01.593282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.540 [2024-11-20 21:06:01.593328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:43.540 [2024-11-20 21:06:01.593339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.440 ms 00:23:43.540 [2024-11-20 21:06:01.593346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.540 [2024-11-20 21:06:01.595727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.540 [2024-11-20 21:06:01.595786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:43.540 [2024-11-20 21:06:01.595798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.311 ms 00:23:43.540 [2024-11-20 21:06:01.595804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.540 [2024-11-20 21:06:01.595843] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:43.540 [2024-11-20 21:06:01.595857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103680 / 261120 wr_cnt: 1 state: open 00:23:43.540 [2024-11-20 21:06:01.595869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.595878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 
[2024-11-20 21:06:01.595887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.595895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.595903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.595911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.595919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.595928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.595937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.595947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.595956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.595964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.595972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.595980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.595988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.595995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: 
free 00:23:43.540 [2024-11-20 21:06:01.596091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:43.540 [2024-11-20 21:06:01.596234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 
261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:43.541 [2024-11-20 21:06:01.596675] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:43.541 [2024-11-20 21:06:01.596683] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d39077d7-7682-4ed0-b438-a9982d49a5b0 00:23:43.541 [2024-11-20 21:06:01.596692] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
total valid LBAs: 103680 00:23:43.541 [2024-11-20 21:06:01.596700] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104640 00:23:43.541 [2024-11-20 21:06:01.596710] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103680 00:23:43.541 [2024-11-20 21:06:01.596723] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:23:43.541 [2024-11-20 21:06:01.596731] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:43.541 [2024-11-20 21:06:01.596740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:43.541 [2024-11-20 21:06:01.596761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:43.541 [2024-11-20 21:06:01.596768] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:43.541 [2024-11-20 21:06:01.596776] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:43.541 [2024-11-20 21:06:01.596784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.541 [2024-11-20 21:06:01.596792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:43.541 [2024-11-20 21:06:01.596801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:23:43.541 [2024-11-20 21:06:01.596810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.541 [2024-11-20 21:06:01.599244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.541 [2024-11-20 21:06:01.599289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:43.541 [2024-11-20 21:06:01.599300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.410 ms 00:23:43.541 [2024-11-20 21:06:01.599308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.541 [2024-11-20 21:06:01.599425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.541 [2024-11-20 21:06:01.599434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:43.541 [2024-11-20 21:06:01.599443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:43.541 [2024-11-20 21:06:01.599451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.541 [2024-11-20 21:06:01.606888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.541 [2024-11-20 21:06:01.606940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:43.541 [2024-11-20 21:06:01.606951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.541 [2024-11-20 21:06:01.606959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.541 [2024-11-20 21:06:01.607036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.541 [2024-11-20 21:06:01.607046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:43.541 [2024-11-20 21:06:01.607054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.541 [2024-11-20 21:06:01.607061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.541 [2024-11-20 21:06:01.607115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.541 [2024-11-20 21:06:01.607126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:43.541 [2024-11-20 21:06:01.607135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.541 
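
[editor's note] The WAF figure in the ftl_dev_dump_stats block above is the ratio of total media writes to user-submitted writes, and the counters printed alongside it reproduce it exactly: 104640 / 103680 ≈ 1.0093, i.e. roughly 0.9% of the traffic was presumably metadata/housekeeping writes on top of the user data. A minimal C sketch of that arithmetic (variable names are illustrative, not SPDK's internal counters):

    #include <stdio.h>

    int main(void)
    {
        /* Counters as reported by ftl_dev_dump_stats in the log above. */
        const double total_writes = 104640.0; /* all writes that hit the media */
        const double user_writes  = 103680.0; /* writes submitted by the user  */

        /* Write amplification factor: media writes per user write. */
        printf("WAF: %.4f\n", total_writes / user_writes); /* -> WAF: 1.0093 */
        return 0;
    }

Compiled and run, this prints the same four-decimal value the log shows; a WAF close to 1.0 is expected here since the workload is a single sequential restore pass. [end note]
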
[2024-11-20 21:06:01.607143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.541 [2024-11-20 21:06:01.607158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.541 [2024-11-20 21:06:01.607167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:43.541 [2024-11-20 21:06:01.607175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.541 [2024-11-20 21:06:01.607183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.541 [2024-11-20 21:06:01.620831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.541 [2024-11-20 21:06:01.620886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:43.541 [2024-11-20 21:06:01.620897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.541 [2024-11-20 21:06:01.620905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.542 [2024-11-20 21:06:01.631102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.542 [2024-11-20 21:06:01.631152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:43.542 [2024-11-20 21:06:01.631163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.542 [2024-11-20 21:06:01.631171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.542 [2024-11-20 21:06:01.631221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.542 [2024-11-20 21:06:01.631237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:43.542 [2024-11-20 21:06:01.631246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.542 [2024-11-20 21:06:01.631254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.542 [2024-11-20 21:06:01.631289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.542 [2024-11-20 21:06:01.631298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:43.542 [2024-11-20 21:06:01.631313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.542 [2024-11-20 21:06:01.631321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.542 [2024-11-20 21:06:01.631388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.542 [2024-11-20 21:06:01.631399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:43.542 [2024-11-20 21:06:01.631410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.542 [2024-11-20 21:06:01.631417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.542 [2024-11-20 21:06:01.631451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.542 [2024-11-20 21:06:01.631460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:43.542 [2024-11-20 21:06:01.631472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.542 [2024-11-20 21:06:01.631480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.542 [2024-11-20 21:06:01.631520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.542 [2024-11-20 21:06:01.631529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:43.542 [2024-11-20 21:06:01.631541] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.542 [2024-11-20 21:06:01.631548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.542 [2024-11-20 21:06:01.631592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.542 [2024-11-20 21:06:01.631601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:43.542 [2024-11-20 21:06:01.631610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.542 [2024-11-20 21:06:01.631619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.542 [2024-11-20 21:06:01.631774] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 407.766 ms, result 0 00:23:43.803 00:23:43.803 00:23:43.803 21:06:01 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:23:43.803 [2024-11-20 21:06:01.896689] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:23:43.803 [2024-11-20 21:06:01.897068] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90237 ] 00:23:44.063 [2024-11-20 21:06:02.044484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.063 [2024-11-20 21:06:02.073143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.326 [2024-11-20 21:06:02.188868] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:44.326 [2024-11-20 21:06:02.188952] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:44.326 [2024-11-20 21:06:02.351096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.326 [2024-11-20 21:06:02.351156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:44.326 [2024-11-20 21:06:02.351171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:44.326 [2024-11-20 21:06:02.351179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.326 [2024-11-20 21:06:02.351239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.326 [2024-11-20 21:06:02.351250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:44.326 [2024-11-20 21:06:02.351260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:44.326 [2024-11-20 21:06:02.351268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.326 [2024-11-20 21:06:02.351292] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:44.326 [2024-11-20 21:06:02.351956] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:44.326 [2024-11-20 21:06:02.352010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.326 [2024-11-20 21:06:02.352023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:44.326 [2024-11-20 21:06:02.352047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.726 ms 00:23:44.326 [2024-11-20 21:06:02.352058] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.326 [2024-11-20 21:06:02.354563] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:44.326 [2024-11-20 21:06:02.358178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.326 [2024-11-20 21:06:02.358232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:44.326 [2024-11-20 21:06:02.358246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.618 ms 00:23:44.326 [2024-11-20 21:06:02.358254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.326 [2024-11-20 21:06:02.358337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.326 [2024-11-20 21:06:02.358349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:44.326 [2024-11-20 21:06:02.358364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:44.326 [2024-11-20 21:06:02.358373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.326 [2024-11-20 21:06:02.366551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.326 [2024-11-20 21:06:02.366596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:44.326 [2024-11-20 21:06:02.366606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.123 ms 00:23:44.326 [2024-11-20 21:06:02.366618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.326 [2024-11-20 21:06:02.366725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.326 [2024-11-20 21:06:02.366735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:44.326 [2024-11-20 21:06:02.366779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:23:44.326 [2024-11-20 21:06:02.366789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.326 [2024-11-20 21:06:02.366850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.326 [2024-11-20 21:06:02.366861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:44.326 [2024-11-20 21:06:02.366870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:44.326 [2024-11-20 21:06:02.366879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.326 [2024-11-20 21:06:02.366906] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:44.326 [2024-11-20 21:06:02.368977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.326 [2024-11-20 21:06:02.369020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:44.326 [2024-11-20 21:06:02.369030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.072 ms 00:23:44.326 [2024-11-20 21:06:02.369038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.326 [2024-11-20 21:06:02.369072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.326 [2024-11-20 21:06:02.369081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:44.326 [2024-11-20 21:06:02.369090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:44.326 [2024-11-20 21:06:02.369098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.326 [2024-11-20 21:06:02.369127] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:44.326 [2024-11-20 21:06:02.369148] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:44.326 [2024-11-20 21:06:02.369190] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:44.326 [2024-11-20 21:06:02.369208] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:44.326 [2024-11-20 21:06:02.369316] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:44.326 [2024-11-20 21:06:02.369329] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:44.326 [2024-11-20 21:06:02.369345] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:44.326 [2024-11-20 21:06:02.369359] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:44.326 [2024-11-20 21:06:02.369372] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:44.326 [2024-11-20 21:06:02.369384] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:44.326 [2024-11-20 21:06:02.369396] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:44.326 [2024-11-20 21:06:02.369405] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:44.326 [2024-11-20 21:06:02.369415] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:44.326 [2024-11-20 21:06:02.369424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.326 [2024-11-20 21:06:02.369433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:44.326 [2024-11-20 21:06:02.369441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:23:44.326 [2024-11-20 21:06:02.369449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.326 [2024-11-20 21:06:02.369531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.326 [2024-11-20 21:06:02.369544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:44.326 [2024-11-20 21:06:02.369552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:44.326 [2024-11-20 21:06:02.369559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.326 [2024-11-20 21:06:02.369665] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:44.326 [2024-11-20 21:06:02.369682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:44.326 [2024-11-20 21:06:02.369691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:44.326 [2024-11-20 21:06:02.369709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.326 [2024-11-20 21:06:02.369719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:44.326 [2024-11-20 21:06:02.369727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:44.326 [2024-11-20 21:06:02.369735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:44.326 [2024-11-20 21:06:02.369762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:23:44.326 [2024-11-20 21:06:02.369771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:44.326 [2024-11-20 21:06:02.369780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:44.326 [2024-11-20 21:06:02.369789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:44.326 [2024-11-20 21:06:02.369798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:44.326 [2024-11-20 21:06:02.369807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:44.326 [2024-11-20 21:06:02.369817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:44.326 [2024-11-20 21:06:02.369825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:44.327 [2024-11-20 21:06:02.369833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.327 [2024-11-20 21:06:02.369841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:44.327 [2024-11-20 21:06:02.369849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:44.327 [2024-11-20 21:06:02.369858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.327 [2024-11-20 21:06:02.369869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:44.327 [2024-11-20 21:06:02.369877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:44.327 [2024-11-20 21:06:02.369885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.327 [2024-11-20 21:06:02.369892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:44.327 [2024-11-20 21:06:02.369901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:44.327 [2024-11-20 21:06:02.369908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.327 [2024-11-20 21:06:02.369916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:44.327 [2024-11-20 21:06:02.369925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:44.327 [2024-11-20 21:06:02.369933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.327 [2024-11-20 21:06:02.369941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:44.327 [2024-11-20 21:06:02.369949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:44.327 [2024-11-20 21:06:02.369956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:44.327 [2024-11-20 21:06:02.369964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:44.327 [2024-11-20 21:06:02.369971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:44.327 [2024-11-20 21:06:02.369978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:44.327 [2024-11-20 21:06:02.369988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:44.327 [2024-11-20 21:06:02.369998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:44.327 [2024-11-20 21:06:02.370006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:44.327 [2024-11-20 21:06:02.370014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:44.327 [2024-11-20 21:06:02.370022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:44.327 [2024-11-20 21:06:02.370029] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.327 [2024-11-20 21:06:02.370038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:44.327 [2024-11-20 21:06:02.370045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:44.327 [2024-11-20 21:06:02.370052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.327 [2024-11-20 21:06:02.370058] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:44.327 [2024-11-20 21:06:02.370067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:44.327 [2024-11-20 21:06:02.370083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:44.327 [2024-11-20 21:06:02.370095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:44.327 [2024-11-20 21:06:02.370103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:44.327 [2024-11-20 21:06:02.370111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:44.327 [2024-11-20 21:06:02.370130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:44.327 [2024-11-20 21:06:02.370137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:44.327 [2024-11-20 21:06:02.370146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:44.327 [2024-11-20 21:06:02.370153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:44.327 [2024-11-20 21:06:02.370161] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:44.327 [2024-11-20 21:06:02.370171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:44.327 [2024-11-20 21:06:02.370185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:44.327 [2024-11-20 21:06:02.370193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:44.327 [2024-11-20 21:06:02.370200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:44.327 [2024-11-20 21:06:02.370207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:44.327 [2024-11-20 21:06:02.370215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:44.327 [2024-11-20 21:06:02.370223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:44.327 [2024-11-20 21:06:02.370230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:44.327 [2024-11-20 21:06:02.370237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:44.327 [2024-11-20 21:06:02.370245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:44.327 [2024-11-20 21:06:02.370253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:23:44.327 [2024-11-20 21:06:02.370261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:44.327 [2024-11-20 21:06:02.370274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:44.327 [2024-11-20 21:06:02.370283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:44.327 [2024-11-20 21:06:02.370290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:44.327 [2024-11-20 21:06:02.370298] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:44.327 [2024-11-20 21:06:02.370307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:44.327 [2024-11-20 21:06:02.370315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:44.327 [2024-11-20 21:06:02.370323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:44.327 [2024-11-20 21:06:02.370330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:44.327 [2024-11-20 21:06:02.370338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:44.327 [2024-11-20 21:06:02.370346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.327 [2024-11-20 21:06:02.370354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:44.327 [2024-11-20 21:06:02.370366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:23:44.327 [2024-11-20 21:06:02.370375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.327 [2024-11-20 21:06:02.384493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.327 [2024-11-20 21:06:02.384543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:44.327 [2024-11-20 21:06:02.384554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.069 ms 00:23:44.327 [2024-11-20 21:06:02.384562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.327 [2024-11-20 21:06:02.384650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.327 [2024-11-20 21:06:02.384658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:44.327 [2024-11-20 21:06:02.384667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:44.327 [2024-11-20 21:06:02.384682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.327 [2024-11-20 21:06:02.416131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.327 [2024-11-20 21:06:02.416233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:44.327 [2024-11-20 21:06:02.416265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.377 ms 00:23:44.327 [2024-11-20 21:06:02.416287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:44.327 [2024-11-20 21:06:02.416386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.327 [2024-11-20 21:06:02.416412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:44.327 [2024-11-20 21:06:02.416436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:44.327 [2024-11-20 21:06:02.416471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.327 [2024-11-20 21:06:02.417275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.327 [2024-11-20 21:06:02.417364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:44.327 [2024-11-20 21:06:02.417400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:23:44.327 [2024-11-20 21:06:02.417420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.327 [2024-11-20 21:06:02.417774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.327 [2024-11-20 21:06:02.417815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:44.327 [2024-11-20 21:06:02.417844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:23:44.327 [2024-11-20 21:06:02.417866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.327 [2024-11-20 21:06:02.426228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.327 [2024-11-20 21:06:02.426280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:44.327 [2024-11-20 21:06:02.426297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.313 ms 00:23:44.327 [2024-11-20 21:06:02.426305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.327 [2024-11-20 21:06:02.430253] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:23:44.327 [2024-11-20 21:06:02.430306] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:44.327 [2024-11-20 21:06:02.430319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.327 [2024-11-20 21:06:02.430328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:44.327 [2024-11-20 21:06:02.430338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.905 ms 00:23:44.327 [2024-11-20 21:06:02.430346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.588 [2024-11-20 21:06:02.446406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.588 [2024-11-20 21:06:02.446467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:44.588 [2024-11-20 21:06:02.446480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.006 ms 00:23:44.588 [2024-11-20 21:06:02.446488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.588 [2024-11-20 21:06:02.449405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.588 [2024-11-20 21:06:02.449456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:44.588 [2024-11-20 21:06:02.449474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.861 ms 00:23:44.588 [2024-11-20 21:06:02.449482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.588 [2024-11-20 21:06:02.452133] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.588 [2024-11-20 21:06:02.452183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:44.588 [2024-11-20 21:06:02.452194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.605 ms 00:23:44.588 [2024-11-20 21:06:02.452202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.588 [2024-11-20 21:06:02.452553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.588 [2024-11-20 21:06:02.452573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:44.588 [2024-11-20 21:06:02.452584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:23:44.588 [2024-11-20 21:06:02.452598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.588 [2024-11-20 21:06:02.477327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.588 [2024-11-20 21:06:02.477395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:44.588 [2024-11-20 21:06:02.477408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.704 ms 00:23:44.588 [2024-11-20 21:06:02.477417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.588 [2024-11-20 21:06:02.485737] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:44.588 [2024-11-20 21:06:02.488804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.588 [2024-11-20 21:06:02.488856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:44.588 [2024-11-20 21:06:02.488868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.330 ms 00:23:44.588 [2024-11-20 21:06:02.488881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.588 [2024-11-20 21:06:02.488963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.588 [2024-11-20 21:06:02.488973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:44.588 [2024-11-20 21:06:02.488984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:44.588 [2024-11-20 21:06:02.488993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.588 [2024-11-20 21:06:02.490763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.588 [2024-11-20 21:06:02.490817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:44.588 [2024-11-20 21:06:02.490831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.727 ms 00:23:44.588 [2024-11-20 21:06:02.490840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.588 [2024-11-20 21:06:02.490874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.588 [2024-11-20 21:06:02.490883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:44.588 [2024-11-20 21:06:02.490896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:44.588 [2024-11-20 21:06:02.490905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.588 [2024-11-20 21:06:02.490946] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:44.588 [2024-11-20 21:06:02.490956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.588 [2024-11-20 21:06:02.490965] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:44.588 [2024-11-20 21:06:02.490974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:44.588 [2024-11-20 21:06:02.490986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.588 [2024-11-20 21:06:02.496542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.588 [2024-11-20 21:06:02.496595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:44.588 [2024-11-20 21:06:02.496608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.536 ms 00:23:44.588 [2024-11-20 21:06:02.496617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.588 [2024-11-20 21:06:02.496705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:44.588 [2024-11-20 21:06:02.496719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:44.588 [2024-11-20 21:06:02.496730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:44.588 [2024-11-20 21:06:02.496763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:44.588 [2024-11-20 21:06:02.498139] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 146.345 ms, result 0 00:23:45.974  [2024-11-20T21:06:05.037Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-20T21:06:05.979Z] Copying: 25/1024 [MB] (10 MBps) [2024-11-20T21:06:06.922Z] Copying: 36/1024 [MB] (10 MBps) [2024-11-20T21:06:07.866Z] Copying: 49/1024 [MB] (12 MBps) [2024-11-20T21:06:08.812Z] Copying: 59/1024 [MB] (10 MBps) [2024-11-20T21:06:09.763Z] Copying: 70/1024 [MB] (10 MBps) [2024-11-20T21:06:10.708Z] Copying: 93/1024 [MB] (22 MBps) [2024-11-20T21:06:12.097Z] Copying: 105/1024 [MB] (12 MBps) [2024-11-20T21:06:13.043Z] Copying: 119/1024 [MB] (14 MBps) [2024-11-20T21:06:13.988Z] Copying: 135/1024 [MB] (16 MBps) [2024-11-20T21:06:14.933Z] Copying: 155/1024 [MB] (19 MBps) [2024-11-20T21:06:15.878Z] Copying: 171/1024 [MB] (16 MBps) [2024-11-20T21:06:16.824Z] Copying: 186/1024 [MB] (14 MBps) [2024-11-20T21:06:17.811Z] Copying: 208/1024 [MB] (22 MBps) [2024-11-20T21:06:18.774Z] Copying: 223/1024 [MB] (14 MBps) [2024-11-20T21:06:19.729Z] Copying: 241/1024 [MB] (18 MBps) [2024-11-20T21:06:21.115Z] Copying: 257/1024 [MB] (15 MBps) [2024-11-20T21:06:21.687Z] Copying: 275/1024 [MB] (17 MBps) [2024-11-20T21:06:23.074Z] Copying: 300/1024 [MB] (25 MBps) [2024-11-20T21:06:24.017Z] Copying: 313/1024 [MB] (13 MBps) [2024-11-20T21:06:24.960Z] Copying: 331/1024 [MB] (17 MBps) [2024-11-20T21:06:25.904Z] Copying: 350/1024 [MB] (19 MBps) [2024-11-20T21:06:26.845Z] Copying: 362/1024 [MB] (11 MBps) [2024-11-20T21:06:27.791Z] Copying: 373/1024 [MB] (10 MBps) [2024-11-20T21:06:28.734Z] Copying: 384/1024 [MB] (10 MBps) [2024-11-20T21:06:29.691Z] Copying: 399/1024 [MB] (15 MBps) [2024-11-20T21:06:31.078Z] Copying: 413/1024 [MB] (13 MBps) [2024-11-20T21:06:32.023Z] Copying: 432/1024 [MB] (18 MBps) [2024-11-20T21:06:32.966Z] Copying: 446/1024 [MB] (14 MBps) [2024-11-20T21:06:33.911Z] Copying: 461/1024 [MB] (15 MBps) [2024-11-20T21:06:34.857Z] Copying: 473/1024 [MB] (11 MBps) [2024-11-20T21:06:35.802Z] Copying: 496/1024 [MB] (22 MBps) [2024-11-20T21:06:36.745Z] Copying: 519/1024 [MB] (22 MBps) [2024-11-20T21:06:37.705Z] Copying: 546/1024 [MB] (27 MBps) [2024-11-20T21:06:39.091Z] Copying: 568/1024 [MB] (21 MBps) [2024-11-20T21:06:40.040Z] Copying: 580/1024 [MB] (12 MBps) 
[2024-11-20T21:06:40.985Z] Copying: 592/1024 [MB] (12 MBps) [2024-11-20T21:06:41.931Z] Copying: 611/1024 [MB] (18 MBps) [2024-11-20T21:06:42.876Z] Copying: 632/1024 [MB] (21 MBps) [2024-11-20T21:06:43.821Z] Copying: 645/1024 [MB] (12 MBps) [2024-11-20T21:06:44.765Z] Copying: 670/1024 [MB] (25 MBps) [2024-11-20T21:06:45.709Z] Copying: 685/1024 [MB] (14 MBps) [2024-11-20T21:06:47.159Z] Copying: 698/1024 [MB] (13 MBps) [2024-11-20T21:06:47.772Z] Copying: 710/1024 [MB] (12 MBps) [2024-11-20T21:06:48.718Z] Copying: 729/1024 [MB] (18 MBps) [2024-11-20T21:06:50.104Z] Copying: 744/1024 [MB] (14 MBps) [2024-11-20T21:06:51.048Z] Copying: 764/1024 [MB] (20 MBps) [2024-11-20T21:06:51.992Z] Copying: 788/1024 [MB] (23 MBps) [2024-11-20T21:06:52.936Z] Copying: 798/1024 [MB] (10 MBps) [2024-11-20T21:06:53.880Z] Copying: 809/1024 [MB] (10 MBps) [2024-11-20T21:06:54.823Z] Copying: 820/1024 [MB] (10 MBps) [2024-11-20T21:06:55.765Z] Copying: 831/1024 [MB] (11 MBps) [2024-11-20T21:06:56.709Z] Copying: 842/1024 [MB] (11 MBps) [2024-11-20T21:06:58.098Z] Copying: 852/1024 [MB] (10 MBps) [2024-11-20T21:06:59.041Z] Copying: 863/1024 [MB] (10 MBps) [2024-11-20T21:06:59.985Z] Copying: 878/1024 [MB] (14 MBps) [2024-11-20T21:07:00.930Z] Copying: 888/1024 [MB] (10 MBps) [2024-11-20T21:07:01.874Z] Copying: 905/1024 [MB] (16 MBps) [2024-11-20T21:07:02.817Z] Copying: 916/1024 [MB] (11 MBps) [2024-11-20T21:07:03.763Z] Copying: 927/1024 [MB] (10 MBps) [2024-11-20T21:07:04.708Z] Copying: 943/1024 [MB] (15 MBps) [2024-11-20T21:07:06.097Z] Copying: 958/1024 [MB] (15 MBps) [2024-11-20T21:07:07.041Z] Copying: 973/1024 [MB] (15 MBps) [2024-11-20T21:07:07.984Z] Copying: 988/1024 [MB] (15 MBps) [2024-11-20T21:07:08.929Z] Copying: 1006/1024 [MB] (17 MBps) [2024-11-20T21:07:08.929Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-20 21:07:08.618635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.810 [2024-11-20 21:07:08.618694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:50.810 [2024-11-20 21:07:08.618707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:50.810 [2024-11-20 21:07:08.618715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.810 [2024-11-20 21:07:08.618731] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:50.810 [2024-11-20 21:07:08.619247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.810 [2024-11-20 21:07:08.619276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:50.810 [2024-11-20 21:07:08.619286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:24:50.810 [2024-11-20 21:07:08.619300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.810 [2024-11-20 21:07:08.619471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.810 [2024-11-20 21:07:08.619488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:50.810 [2024-11-20 21:07:08.619496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:24:50.810 [2024-11-20 21:07:08.619509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.810 [2024-11-20 21:07:08.623755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.810 [2024-11-20 21:07:08.623791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:50.810 [2024-11-20 
21:07:08.623800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.232 ms 00:24:50.810 [2024-11-20 21:07:08.623807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.810 [2024-11-20 21:07:08.628983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.810 [2024-11-20 21:07:08.629022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:50.810 [2024-11-20 21:07:08.629031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.150 ms 00:24:50.810 [2024-11-20 21:07:08.629036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.810 [2024-11-20 21:07:08.630370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.811 [2024-11-20 21:07:08.630404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:50.811 [2024-11-20 21:07:08.630412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.293 ms 00:24:50.811 [2024-11-20 21:07:08.630417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.811 [2024-11-20 21:07:08.633760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.811 [2024-11-20 21:07:08.633792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:50.811 [2024-11-20 21:07:08.633800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.316 ms 00:24:50.811 [2024-11-20 21:07:08.633805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.811 [2024-11-20 21:07:08.735119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.811 [2024-11-20 21:07:08.735157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:50.811 [2024-11-20 21:07:08.735167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.279 ms 00:24:50.811 [2024-11-20 21:07:08.735173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.811 [2024-11-20 21:07:08.738042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.811 [2024-11-20 21:07:08.738205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:50.811 [2024-11-20 21:07:08.738233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.849 ms 00:24:50.811 [2024-11-20 21:07:08.738255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.811 [2024-11-20 21:07:08.740260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.811 [2024-11-20 21:07:08.740329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:50.811 [2024-11-20 21:07:08.740352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.895 ms 00:24:50.811 [2024-11-20 21:07:08.740370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.811 [2024-11-20 21:07:08.742949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.811 [2024-11-20 21:07:08.743019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:50.811 [2024-11-20 21:07:08.743042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.452 ms 00:24:50.811 [2024-11-20 21:07:08.743060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.811 [2024-11-20 21:07:08.745788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.811 [2024-11-20 21:07:08.745884] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:50.811 [2024-11-20 21:07:08.745915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.602 ms 00:24:50.811 [2024-11-20 21:07:08.745934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.811 [2024-11-20 21:07:08.746005] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:50.811 [2024-11-20 21:07:08.746040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:24:50.811 [2024-11-20 21:07:08.746090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 
21:07:08.746541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.746982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:24:50.811 [2024-11-20 21:07:08.747144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:50.811 [2024-11-20 21:07:08.747513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.747990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.748009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.748033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.748053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.748081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.748103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.748124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.748145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.748165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.748185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.748205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.748225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.748247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:50.812 [2024-11-20 21:07:08.748290] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:50.812 [2024-11-20 21:07:08.748319] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d39077d7-7682-4ed0-b438-a9982d49a5b0 00:24:50.812 [2024-11-20 21:07:08.748340] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:50.812 [2024-11-20 21:07:08.748367] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 28352 00:24:50.812 [2024-11-20 21:07:08.748394] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 27392 00:24:50.812 [2024-11-20 21:07:08.748431] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0350 00:24:50.812 [2024-11-20 21:07:08.748450] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:50.812 [2024-11-20 21:07:08.748470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:50.812 [2024-11-20 21:07:08.748490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:50.812 [2024-11-20 21:07:08.748516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:50.812 [2024-11-20 21:07:08.748534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:50.812 [2024-11-20 21:07:08.748562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.812 [2024-11-20 21:07:08.748583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:50.812 [2024-11-20 21:07:08.748604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.558 ms 00:24:50.812 [2024-11-20 21:07:08.748623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.812 [2024-11-20 21:07:08.750770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.812 [2024-11-20 21:07:08.750800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:50.812 [2024-11-20 21:07:08.750810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.110 ms 00:24:50.812 [2024-11-20 21:07:08.750818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.812 [2024-11-20 21:07:08.750892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.812 [2024-11-20 21:07:08.750901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:50.812 [2024-11-20 21:07:08.750911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:50.812 [2024-11-20 21:07:08.750924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.812 [2024-11-20 21:07:08.755521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.812 [2024-11-20 21:07:08.755553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:50.812 [2024-11-20 21:07:08.755562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.812 [2024-11-20 21:07:08.755569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
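The write-amplification figure in the statistics dump above follows directly from the two counters printed beside it: WAF = total writes / user writes = 28352 / 27392 ≈ 1.0350, i.e. the FTL wrote about 3.5% more blocks to media than the user submitted, the extra 960 writes presumably being metadata and relocation traffic. A one-liner to reproduce the printed value, assuming only the two counters shown above:

  $ awk 'BEGIN { printf "WAF: %.4f\n", 28352 / 27392 }'
  WAF: 1.0350
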
00:24:50.812 [2024-11-20 21:07:08.755616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.812 [2024-11-20 21:07:08.755628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:50.812 [2024-11-20 21:07:08.755636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.812 [2024-11-20 21:07:08.755643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.812 [2024-11-20 21:07:08.755680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.812 [2024-11-20 21:07:08.755690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:50.812 [2024-11-20 21:07:08.755697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.812 [2024-11-20 21:07:08.755704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.812 [2024-11-20 21:07:08.755718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.812 [2024-11-20 21:07:08.755725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:50.812 [2024-11-20 21:07:08.755732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.812 [2024-11-20 21:07:08.755739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.812 [2024-11-20 21:07:08.764025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.812 [2024-11-20 21:07:08.764065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:50.812 [2024-11-20 21:07:08.764075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.812 [2024-11-20 21:07:08.764083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.812 [2024-11-20 21:07:08.770823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.812 [2024-11-20 21:07:08.770860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:50.812 [2024-11-20 21:07:08.770869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.812 [2024-11-20 21:07:08.770876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.812 [2024-11-20 21:07:08.770927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.812 [2024-11-20 21:07:08.770940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:50.812 [2024-11-20 21:07:08.770948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.812 [2024-11-20 21:07:08.770955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.812 [2024-11-20 21:07:08.770979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.812 [2024-11-20 21:07:08.770986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:50.812 [2024-11-20 21:07:08.770994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.812 [2024-11-20 21:07:08.771001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.812 [2024-11-20 21:07:08.771059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.812 [2024-11-20 21:07:08.771069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:50.812 [2024-11-20 21:07:08.771082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.812 [2024-11-20 
21:07:08.771089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.812 [2024-11-20 21:07:08.771114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.812 [2024-11-20 21:07:08.771122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:50.812 [2024-11-20 21:07:08.771130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.812 [2024-11-20 21:07:08.771136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.812 [2024-11-20 21:07:08.771168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.812 [2024-11-20 21:07:08.771177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:50.812 [2024-11-20 21:07:08.771188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.812 [2024-11-20 21:07:08.771198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.813 [2024-11-20 21:07:08.771235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.813 [2024-11-20 21:07:08.771245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:50.813 [2024-11-20 21:07:08.771253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.813 [2024-11-20 21:07:08.771260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.813 [2024-11-20 21:07:08.771372] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 152.704 ms, result 0 00:24:51.073 00:24:51.073 00:24:51.073 21:07:09 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:53.621 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:53.621 21:07:11 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:53.621 21:07:11 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:53.621 21:07:11 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:53.621 21:07:11 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:53.621 21:07:11 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:53.621 Process with pid 88159 is not found 00:24:53.621 Remove shared memory files 00:24:53.621 21:07:11 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 88159 00:24:53.621 21:07:11 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88159 ']' 00:24:53.621 21:07:11 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88159 00:24:53.621 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88159) - No such process 00:24:53.621 21:07:11 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 88159 is not found' 00:24:53.621 21:07:11 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:53.621 21:07:11 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:53.621 21:07:11 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:53.621 21:07:11 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:53.621 21:07:11 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:53.621 21:07:11 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:53.621 21:07:11 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:53.621 
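The restore case closes with an integrity check followed by teardown: data read back through the re-opened FTL bdev is checked against the md5 recorded earlier in the test, and only then are the scratch files removed and the target process reaped (here pid 88159 had already exited, hence the "No such process" branch). Stripped of the xtrace noise, the sequence is roughly the following sketch, with paths and pid taken from this log:

  # verify the restored contents match the recorded checksum
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
  # teardown: scratch files, FTL JSON config, then the target process
  rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 \
        /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  kill -0 88159 && kill 88159    # kill -0 fails here: process already gone
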
************************************ 00:24:53.621 END TEST ftl_restore 00:24:53.621 ************************************ 00:24:53.621 00:24:53.621 real 4m28.030s 00:24:53.621 user 4m16.364s 00:24:53.621 sys 0m11.799s 00:24:53.621 21:07:11 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:53.621 21:07:11 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:53.621 21:07:11 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:53.621 21:07:11 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:53.621 21:07:11 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.621 21:07:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:53.621 ************************************ 00:24:53.621 START TEST ftl_dirty_shutdown 00:24:53.621 ************************************ 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:53.621 * Looking for test storage... 00:24:53.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:53.621 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:53.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.622 --rc genhtml_branch_coverage=1 00:24:53.622 --rc genhtml_function_coverage=1 00:24:53.622 --rc genhtml_legend=1 00:24:53.622 --rc geninfo_all_blocks=1 00:24:53.622 --rc geninfo_unexecuted_blocks=1 00:24:53.622 00:24:53.622 ' 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:53.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.622 --rc genhtml_branch_coverage=1 00:24:53.622 --rc genhtml_function_coverage=1 00:24:53.622 --rc genhtml_legend=1 00:24:53.622 --rc geninfo_all_blocks=1 00:24:53.622 --rc geninfo_unexecuted_blocks=1 00:24:53.622 00:24:53.622 ' 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:53.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.622 --rc genhtml_branch_coverage=1 00:24:53.622 --rc genhtml_function_coverage=1 00:24:53.622 --rc genhtml_legend=1 00:24:53.622 --rc geninfo_all_blocks=1 00:24:53.622 --rc geninfo_unexecuted_blocks=1 00:24:53.622 00:24:53.622 ' 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:53.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.622 --rc genhtml_branch_coverage=1 00:24:53.622 --rc genhtml_function_coverage=1 00:24:53.622 --rc genhtml_legend=1 00:24:53.622 --rc geninfo_all_blocks=1 00:24:53.622 --rc geninfo_unexecuted_blocks=1 00:24:53.622 00:24:53.622 ' 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:53.622 21:07:11 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=91013 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 91013 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91013 ']' 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.622 21:07:11 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:53.883 [2024-11-20 21:07:11.790540] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
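Before the dirty-shutdown scenario can run, the trace below assembles the FTL device over two PCIe controllers: 0000:00:11.0 supplies the base storage and 0000:00:10.0 the non-volatile write cache. Condensed to the RPC calls that appear verbatim in the following lines (rpc.py is scripts/rpc.py in the repo; UUIDs shortened to placeholders, sizes in MiB as printed by the log):

  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe
  rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                           # lvstore (stale one deleted first)
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>             # 103424 MiB thin lvol
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV-cache NVMe
  rpc.py bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB cache split
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0
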
00:24:53.883 [2024-11-20 21:07:11.790910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91013 ] 00:24:53.883 [2024-11-20 21:07:11.936888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.883 [2024-11-20 21:07:11.965443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.827 21:07:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.827 21:07:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:54.827 21:07:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:54.827 21:07:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:54.827 21:07:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:54.827 21:07:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:54.827 21:07:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:54.827 21:07:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:55.088 21:07:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:55.088 21:07:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:55.088 21:07:12 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:55.088 21:07:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:55.088 21:07:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:55.088 21:07:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:55.088 21:07:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:55.088 21:07:12 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:55.088 21:07:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:55.088 { 00:24:55.088 "name": "nvme0n1", 00:24:55.088 "aliases": [ 00:24:55.088 "410516f0-d638-4c7a-a41e-da1cd903a604" 00:24:55.088 ], 00:24:55.088 "product_name": "NVMe disk", 00:24:55.088 "block_size": 4096, 00:24:55.088 "num_blocks": 1310720, 00:24:55.088 "uuid": "410516f0-d638-4c7a-a41e-da1cd903a604", 00:24:55.088 "numa_id": -1, 00:24:55.088 "assigned_rate_limits": { 00:24:55.088 "rw_ios_per_sec": 0, 00:24:55.088 "rw_mbytes_per_sec": 0, 00:24:55.088 "r_mbytes_per_sec": 0, 00:24:55.088 "w_mbytes_per_sec": 0 00:24:55.088 }, 00:24:55.088 "claimed": true, 00:24:55.088 "claim_type": "read_many_write_one", 00:24:55.088 "zoned": false, 00:24:55.088 "supported_io_types": { 00:24:55.088 "read": true, 00:24:55.088 "write": true, 00:24:55.088 "unmap": true, 00:24:55.088 "flush": true, 00:24:55.088 "reset": true, 00:24:55.088 "nvme_admin": true, 00:24:55.088 "nvme_io": true, 00:24:55.088 "nvme_io_md": false, 00:24:55.088 "write_zeroes": true, 00:24:55.088 "zcopy": false, 00:24:55.088 "get_zone_info": false, 00:24:55.088 "zone_management": false, 00:24:55.088 "zone_append": false, 00:24:55.088 "compare": true, 00:24:55.088 "compare_and_write": false, 00:24:55.088 "abort": true, 00:24:55.088 "seek_hole": false, 00:24:55.088 "seek_data": false, 00:24:55.088 
"copy": true, 00:24:55.088 "nvme_iov_md": false 00:24:55.088 }, 00:24:55.088 "driver_specific": { 00:24:55.088 "nvme": [ 00:24:55.088 { 00:24:55.088 "pci_address": "0000:00:11.0", 00:24:55.088 "trid": { 00:24:55.088 "trtype": "PCIe", 00:24:55.088 "traddr": "0000:00:11.0" 00:24:55.088 }, 00:24:55.088 "ctrlr_data": { 00:24:55.088 "cntlid": 0, 00:24:55.088 "vendor_id": "0x1b36", 00:24:55.088 "model_number": "QEMU NVMe Ctrl", 00:24:55.088 "serial_number": "12341", 00:24:55.088 "firmware_revision": "8.0.0", 00:24:55.088 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:55.088 "oacs": { 00:24:55.088 "security": 0, 00:24:55.088 "format": 1, 00:24:55.088 "firmware": 0, 00:24:55.088 "ns_manage": 1 00:24:55.088 }, 00:24:55.088 "multi_ctrlr": false, 00:24:55.088 "ana_reporting": false 00:24:55.088 }, 00:24:55.088 "vs": { 00:24:55.088 "nvme_version": "1.4" 00:24:55.088 }, 00:24:55.088 "ns_data": { 00:24:55.088 "id": 1, 00:24:55.088 "can_share": false 00:24:55.088 } 00:24:55.088 } 00:24:55.088 ], 00:24:55.088 "mp_policy": "active_passive" 00:24:55.088 } 00:24:55.088 } 00:24:55.088 ]' 00:24:55.088 21:07:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:55.088 21:07:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:55.088 21:07:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:55.088 21:07:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:55.088 21:07:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:55.088 21:07:13 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:55.088 21:07:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:55.088 21:07:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:55.089 21:07:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:55.089 21:07:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:55.089 21:07:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:55.350 21:07:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=db98ae6b-6895-4380-97c7-faf115da5d71 00:24:55.350 21:07:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:55.350 21:07:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db98ae6b-6895-4380-97c7-faf115da5d71 00:24:55.612 21:07:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:55.873 21:07:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=4b1c9cbc-7737-4aec-995f-bb9d3a3bf7de 00:24:55.873 21:07:13 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4b1c9cbc-7737-4aec-995f-bb9d3a3bf7de 00:24:56.134 21:07:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=4ee88a5e-2617-4222-b434-8e12dfff97a3 00:24:56.134 21:07:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:56.134 21:07:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4ee88a5e-2617-4222-b434-8e12dfff97a3 00:24:56.134 21:07:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:56.134 21:07:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:56.134 21:07:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=4ee88a5e-2617-4222-b434-8e12dfff97a3 00:24:56.134 21:07:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:56.134 21:07:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 4ee88a5e-2617-4222-b434-8e12dfff97a3 00:24:56.134 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4ee88a5e-2617-4222-b434-8e12dfff97a3 00:24:56.134 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:56.134 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:56.134 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:56.134 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ee88a5e-2617-4222-b434-8e12dfff97a3 00:24:56.396 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:56.396 { 00:24:56.396 "name": "4ee88a5e-2617-4222-b434-8e12dfff97a3", 00:24:56.396 "aliases": [ 00:24:56.396 "lvs/nvme0n1p0" 00:24:56.396 ], 00:24:56.396 "product_name": "Logical Volume", 00:24:56.396 "block_size": 4096, 00:24:56.396 "num_blocks": 26476544, 00:24:56.396 "uuid": "4ee88a5e-2617-4222-b434-8e12dfff97a3", 00:24:56.396 "assigned_rate_limits": { 00:24:56.396 "rw_ios_per_sec": 0, 00:24:56.396 "rw_mbytes_per_sec": 0, 00:24:56.396 "r_mbytes_per_sec": 0, 00:24:56.396 "w_mbytes_per_sec": 0 00:24:56.396 }, 00:24:56.396 "claimed": false, 00:24:56.396 "zoned": false, 00:24:56.396 "supported_io_types": { 00:24:56.396 "read": true, 00:24:56.396 "write": true, 00:24:56.396 "unmap": true, 00:24:56.396 "flush": false, 00:24:56.396 "reset": true, 00:24:56.396 "nvme_admin": false, 00:24:56.396 "nvme_io": false, 00:24:56.396 "nvme_io_md": false, 00:24:56.396 "write_zeroes": true, 00:24:56.396 "zcopy": false, 00:24:56.396 "get_zone_info": false, 00:24:56.396 "zone_management": false, 00:24:56.396 "zone_append": false, 00:24:56.396 "compare": false, 00:24:56.396 "compare_and_write": false, 00:24:56.396 "abort": false, 00:24:56.396 "seek_hole": true, 00:24:56.396 "seek_data": true, 00:24:56.396 "copy": false, 00:24:56.396 "nvme_iov_md": false 00:24:56.396 }, 00:24:56.396 "driver_specific": { 00:24:56.396 "lvol": { 00:24:56.396 "lvol_store_uuid": "4b1c9cbc-7737-4aec-995f-bb9d3a3bf7de", 00:24:56.396 "base_bdev": "nvme0n1", 00:24:56.396 "thin_provision": true, 00:24:56.396 "num_allocated_clusters": 0, 00:24:56.396 "snapshot": false, 00:24:56.396 "clone": false, 00:24:56.396 "esnap_clone": false 00:24:56.396 } 00:24:56.396 } 00:24:56.396 } 00:24:56.396 ]' 00:24:56.396 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:56.396 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:56.396 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:56.396 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:56.396 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:56.396 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:56.396 21:07:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:56.396 21:07:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:56.396 21:07:14 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:56.657 21:07:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:56.657 21:07:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:56.657 21:07:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 4ee88a5e-2617-4222-b434-8e12dfff97a3 00:24:56.657 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4ee88a5e-2617-4222-b434-8e12dfff97a3 00:24:56.657 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:56.657 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:56.657 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:56.657 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ee88a5e-2617-4222-b434-8e12dfff97a3 00:24:56.960 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:56.960 { 00:24:56.960 "name": "4ee88a5e-2617-4222-b434-8e12dfff97a3", 00:24:56.960 "aliases": [ 00:24:56.960 "lvs/nvme0n1p0" 00:24:56.960 ], 00:24:56.960 "product_name": "Logical Volume", 00:24:56.960 "block_size": 4096, 00:24:56.960 "num_blocks": 26476544, 00:24:56.960 "uuid": "4ee88a5e-2617-4222-b434-8e12dfff97a3", 00:24:56.960 "assigned_rate_limits": { 00:24:56.960 "rw_ios_per_sec": 0, 00:24:56.960 "rw_mbytes_per_sec": 0, 00:24:56.960 "r_mbytes_per_sec": 0, 00:24:56.960 "w_mbytes_per_sec": 0 00:24:56.960 }, 00:24:56.960 "claimed": false, 00:24:56.960 "zoned": false, 00:24:56.960 "supported_io_types": { 00:24:56.960 "read": true, 00:24:56.960 "write": true, 00:24:56.960 "unmap": true, 00:24:56.960 "flush": false, 00:24:56.960 "reset": true, 00:24:56.960 "nvme_admin": false, 00:24:56.960 "nvme_io": false, 00:24:56.960 "nvme_io_md": false, 00:24:56.960 "write_zeroes": true, 00:24:56.960 "zcopy": false, 00:24:56.960 "get_zone_info": false, 00:24:56.960 "zone_management": false, 00:24:56.960 "zone_append": false, 00:24:56.960 "compare": false, 00:24:56.960 "compare_and_write": false, 00:24:56.960 "abort": false, 00:24:56.960 "seek_hole": true, 00:24:56.960 "seek_data": true, 00:24:56.960 "copy": false, 00:24:56.960 "nvme_iov_md": false 00:24:56.960 }, 00:24:56.960 "driver_specific": { 00:24:56.960 "lvol": { 00:24:56.960 "lvol_store_uuid": "4b1c9cbc-7737-4aec-995f-bb9d3a3bf7de", 00:24:56.960 "base_bdev": "nvme0n1", 00:24:56.960 "thin_provision": true, 00:24:56.961 "num_allocated_clusters": 0, 00:24:56.961 "snapshot": false, 00:24:56.961 "clone": false, 00:24:56.961 "esnap_clone": false 00:24:56.961 } 00:24:56.961 } 00:24:56.961 } 00:24:56.961 ]' 00:24:56.961 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:56.961 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:56.961 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:56.961 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:56.961 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:56.961 21:07:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:56.961 21:07:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:56.961 21:07:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:57.243 21:07:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:24:57.243 21:07:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 4ee88a5e-2617-4222-b434-8e12dfff97a3 00:24:57.243 21:07:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=4ee88a5e-2617-4222-b434-8e12dfff97a3 00:24:57.243 21:07:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:57.243 21:07:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:57.243 21:07:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:57.243 21:07:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4ee88a5e-2617-4222-b434-8e12dfff97a3 00:24:57.243 21:07:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:57.243 { 00:24:57.243 "name": "4ee88a5e-2617-4222-b434-8e12dfff97a3", 00:24:57.243 "aliases": [ 00:24:57.243 "lvs/nvme0n1p0" 00:24:57.243 ], 00:24:57.243 "product_name": "Logical Volume", 00:24:57.243 "block_size": 4096, 00:24:57.243 "num_blocks": 26476544, 00:24:57.243 "uuid": "4ee88a5e-2617-4222-b434-8e12dfff97a3", 00:24:57.243 "assigned_rate_limits": { 00:24:57.243 "rw_ios_per_sec": 0, 00:24:57.243 "rw_mbytes_per_sec": 0, 00:24:57.243 "r_mbytes_per_sec": 0, 00:24:57.243 "w_mbytes_per_sec": 0 00:24:57.243 }, 00:24:57.243 "claimed": false, 00:24:57.244 "zoned": false, 00:24:57.244 "supported_io_types": { 00:24:57.244 "read": true, 00:24:57.244 "write": true, 00:24:57.244 "unmap": true, 00:24:57.244 "flush": false, 00:24:57.244 "reset": true, 00:24:57.244 "nvme_admin": false, 00:24:57.244 "nvme_io": false, 00:24:57.244 "nvme_io_md": false, 00:24:57.244 "write_zeroes": true, 00:24:57.244 "zcopy": false, 00:24:57.244 "get_zone_info": false, 00:24:57.244 "zone_management": false, 00:24:57.244 "zone_append": false, 00:24:57.244 "compare": false, 00:24:57.244 "compare_and_write": false, 00:24:57.244 "abort": false, 00:24:57.244 "seek_hole": true, 00:24:57.244 "seek_data": true, 00:24:57.244 "copy": false, 00:24:57.244 "nvme_iov_md": false 00:24:57.244 }, 00:24:57.244 "driver_specific": { 00:24:57.244 "lvol": { 00:24:57.244 "lvol_store_uuid": "4b1c9cbc-7737-4aec-995f-bb9d3a3bf7de", 00:24:57.244 "base_bdev": "nvme0n1", 00:24:57.244 "thin_provision": true, 00:24:57.244 "num_allocated_clusters": 0, 00:24:57.244 "snapshot": false, 00:24:57.244 "clone": false, 00:24:57.244 "esnap_clone": false 00:24:57.244 } 00:24:57.244 } 00:24:57.244 } 00:24:57.244 ]' 00:24:57.244 21:07:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:57.244 21:07:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:57.244 21:07:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:57.244 21:07:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:57.244 21:07:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:57.244 21:07:15 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:57.244 21:07:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:24:57.244 21:07:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4ee88a5e-2617-4222-b434-8e12dfff97a3 
--l2p_dram_limit 10' 00:24:57.244 21:07:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:57.244 21:07:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:57.244 21:07:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:57.244 21:07:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4ee88a5e-2617-4222-b434-8e12dfff97a3 --l2p_dram_limit 10 -c nvc0n1p0 00:24:57.505 [2024-11-20 21:07:15.476513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.505 [2024-11-20 21:07:15.476552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:57.505 [2024-11-20 21:07:15.476563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:57.505 [2024-11-20 21:07:15.476571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.505 [2024-11-20 21:07:15.476612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.505 [2024-11-20 21:07:15.476621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:57.505 [2024-11-20 21:07:15.476631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:57.505 [2024-11-20 21:07:15.476641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.505 [2024-11-20 21:07:15.476655] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:57.505 [2024-11-20 21:07:15.476869] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:57.505 [2024-11-20 21:07:15.476883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.505 [2024-11-20 21:07:15.476890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:57.505 [2024-11-20 21:07:15.476902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:24:57.505 [2024-11-20 21:07:15.476910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.505 [2024-11-20 21:07:15.476933] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4c9ab7ca-d499-4cca-a63d-6591be32da33 00:24:57.505 [2024-11-20 21:07:15.477871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.505 [2024-11-20 21:07:15.477891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:57.505 [2024-11-20 21:07:15.477903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:57.505 [2024-11-20 21:07:15.477910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.505 [2024-11-20 21:07:15.482537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.505 [2024-11-20 21:07:15.482643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:57.505 [2024-11-20 21:07:15.482658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.587 ms 00:24:57.506 [2024-11-20 21:07:15.482664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.506 [2024-11-20 21:07:15.482764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.506 [2024-11-20 21:07:15.482777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:57.506 [2024-11-20 21:07:15.482785] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:57.506 [2024-11-20 21:07:15.482790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.506 [2024-11-20 21:07:15.482827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.506 [2024-11-20 21:07:15.482834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:57.506 [2024-11-20 21:07:15.482843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:57.506 [2024-11-20 21:07:15.482849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.506 [2024-11-20 21:07:15.482868] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:57.506 [2024-11-20 21:07:15.484106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.506 [2024-11-20 21:07:15.484126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:57.506 [2024-11-20 21:07:15.484134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:24:57.506 [2024-11-20 21:07:15.484141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.506 [2024-11-20 21:07:15.484166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.506 [2024-11-20 21:07:15.484174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:57.506 [2024-11-20 21:07:15.484181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:57.506 [2024-11-20 21:07:15.484190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.506 [2024-11-20 21:07:15.484203] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:57.506 [2024-11-20 21:07:15.484315] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:57.506 [2024-11-20 21:07:15.484330] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:57.506 [2024-11-20 21:07:15.484340] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:57.506 [2024-11-20 21:07:15.484351] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:57.506 [2024-11-20 21:07:15.484359] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:57.506 [2024-11-20 21:07:15.484371] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:57.506 [2024-11-20 21:07:15.484378] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:57.506 [2024-11-20 21:07:15.484388] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:57.506 [2024-11-20 21:07:15.484394] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:57.506 [2024-11-20 21:07:15.484400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.506 [2024-11-20 21:07:15.484409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:57.506 [2024-11-20 21:07:15.484415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:24:57.506 [2024-11-20 21:07:15.484423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.506 [2024-11-20 21:07:15.484489] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.506 [2024-11-20 21:07:15.484499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:57.506 [2024-11-20 21:07:15.484505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:24:57.506 [2024-11-20 21:07:15.484511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.506 [2024-11-20 21:07:15.484585] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:57.506 [2024-11-20 21:07:15.484594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:57.506 [2024-11-20 21:07:15.484603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:57.506 [2024-11-20 21:07:15.484614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.506 [2024-11-20 21:07:15.484620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:57.506 [2024-11-20 21:07:15.484627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:57.506 [2024-11-20 21:07:15.484633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:57.506 [2024-11-20 21:07:15.484763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:57.506 [2024-11-20 21:07:15.484771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:57.506 [2024-11-20 21:07:15.484779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:57.506 [2024-11-20 21:07:15.484785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:57.506 [2024-11-20 21:07:15.484792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:57.506 [2024-11-20 21:07:15.484800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:57.506 [2024-11-20 21:07:15.484809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:57.506 [2024-11-20 21:07:15.484816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:57.506 [2024-11-20 21:07:15.484823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.506 [2024-11-20 21:07:15.484829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:57.506 [2024-11-20 21:07:15.484837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:57.506 [2024-11-20 21:07:15.484843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.506 [2024-11-20 21:07:15.484851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:57.506 [2024-11-20 21:07:15.484857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:57.506 [2024-11-20 21:07:15.484864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.506 [2024-11-20 21:07:15.484870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:57.506 [2024-11-20 21:07:15.484877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:57.506 [2024-11-20 21:07:15.484884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.506 [2024-11-20 21:07:15.484891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:57.506 [2024-11-20 21:07:15.484896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:57.506 [2024-11-20 21:07:15.484903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.506 [2024-11-20 21:07:15.484909] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:57.506 [2024-11-20 21:07:15.484919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:57.506 [2024-11-20 21:07:15.484925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.506 [2024-11-20 21:07:15.484932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:57.506 [2024-11-20 21:07:15.484938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:57.506 [2024-11-20 21:07:15.484947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:57.506 [2024-11-20 21:07:15.484954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:57.506 [2024-11-20 21:07:15.484961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:57.506 [2024-11-20 21:07:15.484966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:57.506 [2024-11-20 21:07:15.484973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:57.506 [2024-11-20 21:07:15.484979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:57.506 [2024-11-20 21:07:15.484986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.506 [2024-11-20 21:07:15.484992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:57.506 [2024-11-20 21:07:15.484999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:57.506 [2024-11-20 21:07:15.485004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.506 [2024-11-20 21:07:15.485011] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:57.506 [2024-11-20 21:07:15.485017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:57.506 [2024-11-20 21:07:15.485027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:57.506 [2024-11-20 21:07:15.485038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.506 [2024-11-20 21:07:15.485047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:57.506 [2024-11-20 21:07:15.485053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:57.506 [2024-11-20 21:07:15.485060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:57.507 [2024-11-20 21:07:15.485066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:57.507 [2024-11-20 21:07:15.485073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:57.507 [2024-11-20 21:07:15.485079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:57.507 [2024-11-20 21:07:15.485090] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:57.507 [2024-11-20 21:07:15.485102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:57.507 [2024-11-20 21:07:15.485110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:57.507 [2024-11-20 21:07:15.485116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:57.507 [2024-11-20 21:07:15.485125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:57.507 [2024-11-20 21:07:15.485132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:57.507 [2024-11-20 21:07:15.485139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:57.507 [2024-11-20 21:07:15.485144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:57.507 [2024-11-20 21:07:15.485152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:57.507 [2024-11-20 21:07:15.485157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:57.507 [2024-11-20 21:07:15.485164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:57.507 [2024-11-20 21:07:15.485169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:57.507 [2024-11-20 21:07:15.485175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:57.507 [2024-11-20 21:07:15.485180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:57.507 [2024-11-20 21:07:15.485187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:57.507 [2024-11-20 21:07:15.485192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:57.507 [2024-11-20 21:07:15.485200] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:57.507 [2024-11-20 21:07:15.485207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:57.507 [2024-11-20 21:07:15.485214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:57.507 [2024-11-20 21:07:15.485219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:57.507 [2024-11-20 21:07:15.485226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:57.507 [2024-11-20 21:07:15.485231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:57.507 [2024-11-20 21:07:15.485239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.507 [2024-11-20 21:07:15.485246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:57.507 [2024-11-20 21:07:15.485255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:24:57.507 [2024-11-20 21:07:15.485260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.507 [2024-11-20 21:07:15.485296] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:24:57.507 [2024-11-20 21:07:15.485307] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:01.718 [2024-11-20 21:07:19.555726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.555997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:01.718 [2024-11-20 21:07:19.556261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4070.405 ms 00:25:01.718 [2024-11-20 21:07:19.556306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.570475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.570666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:01.718 [2024-11-20 21:07:19.570744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.028 ms 00:25:01.718 [2024-11-20 21:07:19.570802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.570962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.571452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:01.718 [2024-11-20 21:07:19.571519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:25:01.718 [2024-11-20 21:07:19.571544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.584637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.584849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:01.718 [2024-11-20 21:07:19.585281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.992 ms 00:25:01.718 [2024-11-20 21:07:19.585309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.585362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.585378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:01.718 [2024-11-20 21:07:19.585390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:01.718 [2024-11-20 21:07:19.585402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.586012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.586045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:01.718 [2024-11-20 21:07:19.586081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:25:01.718 [2024-11-20 21:07:19.586102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.586264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.586287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:01.718 [2024-11-20 21:07:19.586303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:25:01.718 [2024-11-20 21:07:19.586319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.595480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.595525] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:01.718 [2024-11-20 21:07:19.595539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.127 ms 00:25:01.718 [2024-11-20 21:07:19.595547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.605619] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:01.718 [2024-11-20 21:07:19.609533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.609582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:01.718 [2024-11-20 21:07:19.609599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.897 ms 00:25:01.718 [2024-11-20 21:07:19.609610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.717169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.717242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:01.718 [2024-11-20 21:07:19.717262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.525 ms 00:25:01.718 [2024-11-20 21:07:19.717283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.717471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.717486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:01.718 [2024-11-20 21:07:19.717504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:25:01.718 [2024-11-20 21:07:19.717515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.723831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.724054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:01.718 [2024-11-20 21:07:19.724080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.278 ms 00:25:01.718 [2024-11-20 21:07:19.724096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.729304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.729358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:01.718 [2024-11-20 21:07:19.729370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.176 ms 00:25:01.718 [2024-11-20 21:07:19.729380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.729702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.729717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:01.718 [2024-11-20 21:07:19.729727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:25:01.718 [2024-11-20 21:07:19.729740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.774017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.774094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:01.718 [2024-11-20 21:07:19.774107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.211 ms 00:25:01.718 [2024-11-20 21:07:19.774126] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.718 [2024-11-20 21:07:19.781272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.718 [2024-11-20 21:07:19.781329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:01.719 [2024-11-20 21:07:19.781341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.067 ms 00:25:01.719 [2024-11-20 21:07:19.781352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.719 [2024-11-20 21:07:19.787222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.719 [2024-11-20 21:07:19.787398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:01.719 [2024-11-20 21:07:19.787416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.822 ms 00:25:01.719 [2024-11-20 21:07:19.787426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.719 [2024-11-20 21:07:19.793900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.719 [2024-11-20 21:07:19.794092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:01.719 [2024-11-20 21:07:19.794110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.432 ms 00:25:01.719 [2024-11-20 21:07:19.794124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.719 [2024-11-20 21:07:19.794173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.719 [2024-11-20 21:07:19.794186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:01.719 [2024-11-20 21:07:19.794201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:01.719 [2024-11-20 21:07:19.794211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.719 [2024-11-20 21:07:19.794285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.719 [2024-11-20 21:07:19.794299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:01.719 [2024-11-20 21:07:19.794310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:01.719 [2024-11-20 21:07:19.794321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.719 [2024-11-20 21:07:19.795460] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4318.456 ms, result 0 00:25:01.719 { 00:25:01.719 "name": "ftl0", 00:25:01.719 "uuid": "4c9ab7ca-d499-4cca-a63d-6591be32da33" 00:25:01.719 } 00:25:01.719 21:07:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:01.719 21:07:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:02.293 /dev/nbd0 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:02.293 1+0 records in 00:25:02.293 1+0 records out 00:25:02.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354773 s, 11.5 MB/s 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:02.293 21:07:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:02.554 [2024-11-20 21:07:20.444806] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:25:02.554 [2024-11-20 21:07:20.444924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91161 ] 00:25:02.554 [2024-11-20 21:07:20.592994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.554 [2024-11-20 21:07:20.634254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.941  [2024-11-20T21:07:23.002Z] Copying: 185/1024 [MB] (185 MBps) [2024-11-20T21:07:23.936Z] Copying: 376/1024 [MB] (191 MBps) [2024-11-20T21:07:24.872Z] Copying: 635/1024 [MB] (259 MBps) [2024-11-20T21:07:25.438Z] Copying: 886/1024 [MB] (250 MBps) [2024-11-20T21:07:25.698Z] Copying: 1024/1024 [MB] (average 225 MBps) 00:25:07.579 00:25:07.579 21:07:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:09.494 21:07:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:09.753 [2024-11-20 21:07:27.672028] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
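The trace up to this point sets the stage for the dirty-shutdown test: dirty_shutdown.sh creates the FTL bdev with a 10 MiB L2P DRAM limit and nvc0n1p0 as its write-buffer cache, snapshots the bdev subsystem configuration for later reuse, exposes ftl0 as /dev/nbd0, and stages a 1 GiB random data file. A minimal sketch of that sequence, assuming repo-relative script paths (the trace uses absolute /home/vagrant/spdk_repo paths, and the base bdev UUID varies per run):

  # Create the FTL bdev; -t 240 raises the RPC timeout because first-time
  # startup scrubs the NV cache data region, which the log itself warns
  # "may take a while" (the 'Scrub NV cache' step above took ~4 s).
  scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <base-bdev-uuid> \
      --l2p_dram_limit 10 -c nvc0n1p0

  # Save the bdev subsystem config so a later process can rebuild the
  # same stack via spdk_dd --json (redirect target assumed from the
  # --json path used further below).
  { echo '{"subsystems": ['
    scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'; } > test/ftl/config/ftl.json

  # Expose ftl0 as a kernel block device and stage 1 GiB of random data
  # (262144 x 4 KiB blocks), checksummed for later verification.
  modprobe nbd
  scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
  build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=test/ftl/testfile --bs=4096 --count=262144
  md5sum test/ftl/testfile

The copy of testfile onto /dev/nbd0 that starts here uses --oflag=direct, so the data bypasses the page cache and actually exercises the FTL write path.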
00:25:09.753 [2024-11-20 21:07:27.672165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91242 ] 00:25:09.753 [2024-11-20 21:07:27.813369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.753 [2024-11-20 21:07:27.848816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.127  [2024-11-20T21:07:30.181Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-20T21:07:31.116Z] Copying: 23/1024 [MB] (11 MBps) [2024-11-20T21:07:32.107Z] Copying: 36/1024 [MB] (13 MBps) [2024-11-20T21:07:33.040Z] Copying: 56/1024 [MB] (19 MBps) [2024-11-20T21:07:34.099Z] Copying: 76/1024 [MB] (20 MBps) [2024-11-20T21:07:35.033Z] Copying: 98/1024 [MB] (21 MBps) [2024-11-20T21:07:35.968Z] Copying: 117/1024 [MB] (19 MBps) [2024-11-20T21:07:37.341Z] Copying: 137/1024 [MB] (19 MBps) [2024-11-20T21:07:38.275Z] Copying: 166/1024 [MB] (28 MBps) [2024-11-20T21:07:39.210Z] Copying: 195/1024 [MB] (29 MBps) [2024-11-20T21:07:40.143Z] Copying: 216/1024 [MB] (20 MBps) [2024-11-20T21:07:41.077Z] Copying: 233/1024 [MB] (17 MBps) [2024-11-20T21:07:42.011Z] Copying: 253/1024 [MB] (19 MBps) [2024-11-20T21:07:42.944Z] Copying: 274/1024 [MB] (21 MBps) [2024-11-20T21:07:44.319Z] Copying: 292/1024 [MB] (17 MBps) [2024-11-20T21:07:45.281Z] Copying: 308/1024 [MB] (16 MBps) [2024-11-20T21:07:46.215Z] Copying: 328/1024 [MB] (20 MBps) [2024-11-20T21:07:47.149Z] Copying: 346/1024 [MB] (18 MBps) [2024-11-20T21:07:48.081Z] Copying: 371/1024 [MB] (24 MBps) [2024-11-20T21:07:49.015Z] Copying: 389/1024 [MB] (17 MBps) [2024-11-20T21:07:49.948Z] Copying: 403/1024 [MB] (14 MBps) [2024-11-20T21:07:51.324Z] Copying: 423/1024 [MB] (19 MBps) [2024-11-20T21:07:52.258Z] Copying: 443/1024 [MB] (20 MBps) [2024-11-20T21:07:53.191Z] Copying: 463/1024 [MB] (20 MBps) [2024-11-20T21:07:54.124Z] Copying: 479/1024 [MB] (16 MBps) [2024-11-20T21:07:55.058Z] Copying: 498/1024 [MB] (18 MBps) [2024-11-20T21:07:55.991Z] Copying: 519/1024 [MB] (21 MBps) [2024-11-20T21:07:57.362Z] Copying: 541/1024 [MB] (22 MBps) [2024-11-20T21:07:57.927Z] Copying: 562/1024 [MB] (20 MBps) [2024-11-20T21:07:59.309Z] Copying: 581/1024 [MB] (19 MBps) [2024-11-20T21:08:00.245Z] Copying: 600/1024 [MB] (18 MBps) [2024-11-20T21:08:01.179Z] Copying: 619/1024 [MB] (19 MBps) [2024-11-20T21:08:02.112Z] Copying: 639/1024 [MB] (20 MBps) [2024-11-20T21:08:03.044Z] Copying: 669/1024 [MB] (29 MBps) [2024-11-20T21:08:03.979Z] Copying: 704/1024 [MB] (34 MBps) [2024-11-20T21:08:05.353Z] Copying: 737/1024 [MB] (32 MBps) [2024-11-20T21:08:06.286Z] Copying: 759/1024 [MB] (22 MBps) [2024-11-20T21:08:07.221Z] Copying: 784/1024 [MB] (24 MBps) [2024-11-20T21:08:08.155Z] Copying: 805/1024 [MB] (21 MBps) [2024-11-20T21:08:09.088Z] Copying: 829/1024 [MB] (23 MBps) [2024-11-20T21:08:10.040Z] Copying: 851/1024 [MB] (21 MBps) [2024-11-20T21:08:10.973Z] Copying: 873/1024 [MB] (22 MBps) [2024-11-20T21:08:12.347Z] Copying: 895/1024 [MB] (21 MBps) [2024-11-20T21:08:13.282Z] Copying: 915/1024 [MB] (20 MBps) [2024-11-20T21:08:14.216Z] Copying: 934/1024 [MB] (18 MBps) [2024-11-20T21:08:15.149Z] Copying: 954/1024 [MB] (20 MBps) [2024-11-20T21:08:16.182Z] Copying: 984/1024 [MB] (29 MBps) [2024-11-20T21:08:16.762Z] Copying: 1007/1024 [MB] (23 MBps) [2024-11-20T21:08:17.022Z] Copying: 1024/1024 [MB] (average 21 MBps) 00:25:58.903 00:25:58.903 21:08:16 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:58.903 21:08:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:59.165 21:08:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:59.165 [2024-11-20 21:08:17.202296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.165 [2024-11-20 21:08:17.202342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:59.165 [2024-11-20 21:08:17.202356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:59.165 [2024-11-20 21:08:17.202365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.165 [2024-11-20 21:08:17.202390] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:59.165 [2024-11-20 21:08:17.202844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.165 [2024-11-20 21:08:17.202875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:59.165 [2024-11-20 21:08:17.202884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:25:59.165 [2024-11-20 21:08:17.202896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.165 [2024-11-20 21:08:17.205436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.165 [2024-11-20 21:08:17.205473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:59.165 [2024-11-20 21:08:17.205483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.519 ms 00:25:59.165 [2024-11-20 21:08:17.205492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.165 [2024-11-20 21:08:17.221774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.165 [2024-11-20 21:08:17.221811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:59.165 [2024-11-20 21:08:17.221821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.264 ms 00:25:59.165 [2024-11-20 21:08:17.221832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.165 [2024-11-20 21:08:17.227965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.165 [2024-11-20 21:08:17.227996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:59.165 [2024-11-20 21:08:17.228005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.102 ms 00:25:59.165 [2024-11-20 21:08:17.228019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.165 [2024-11-20 21:08:17.230200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.165 [2024-11-20 21:08:17.230237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:59.165 [2024-11-20 21:08:17.230246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.077 ms 00:25:59.165 [2024-11-20 21:08:17.230255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.165 [2024-11-20 21:08:17.235270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.165 [2024-11-20 21:08:17.235312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:59.165 [2024-11-20 21:08:17.235321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.983 ms 00:25:59.165 
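The bdev_ftl_unload trace now in progress is the clean-shutdown half of the test: the core poller stops, then the L2P, NV-cache metadata, valid map, P2L, band info, and trim metadata are persisted, the superblock is written, and the device is marked clean before teardown. The commands that triggered it, distilled from the trace above:

  # Flush buffered writes, detach the NBD export, then unload the FTL
  # bdev; the unload drives the 'FTL shutdown' management process whose
  # per-step durations are logged below.
  sync /dev/nbd0
  scripts/rpc.py nbd_stop_disk /dev/nbd0
  scripts/rpc.py bdev_ftl_unload -b ftl0

Because this unload completes (result 0 below), the FTL superblock is left clean; the dirty state exercised later comes from killing the target process outright, which leaves the underlying blobstore to be recovered on the next load (visible further below).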
[2024-11-20 21:08:17.235331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.165 [2024-11-20 21:08:17.235448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.165 [2024-11-20 21:08:17.235460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:59.165 [2024-11-20 21:08:17.235469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:25:59.165 [2024-11-20 21:08:17.235478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.165 [2024-11-20 21:08:17.237493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.165 [2024-11-20 21:08:17.237534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:59.165 [2024-11-20 21:08:17.237543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.998 ms 00:25:59.165 [2024-11-20 21:08:17.237552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.165 [2024-11-20 21:08:17.239130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.165 [2024-11-20 21:08:17.239169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:59.165 [2024-11-20 21:08:17.239178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:25:59.165 [2024-11-20 21:08:17.239187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.165 [2024-11-20 21:08:17.240407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.165 [2024-11-20 21:08:17.240442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:59.165 [2024-11-20 21:08:17.240450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:25:59.165 [2024-11-20 21:08:17.240459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.165 [2024-11-20 21:08:17.241574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.165 [2024-11-20 21:08:17.241609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:59.165 [2024-11-20 21:08:17.241617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:25:59.165 [2024-11-20 21:08:17.241626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.165 [2024-11-20 21:08:17.241654] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:59.165 [2024-11-20 21:08:17.241670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:59.165 [2024-11-20 21:08:17.241680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:59.165 [2024-11-20 21:08:17.241689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:59.165 [2024-11-20 21:08:17.241697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:59.165 [2024-11-20 21:08:17.241708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:59.165 [2024-11-20 21:08:17.241715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:59.165 [2024-11-20 21:08:17.241724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:59.165 [2024-11-20 21:08:17.241732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:59.165 [2024-11-20 21:08:17.241741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:59.165 [2024-11-20 21:08:17.241765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:59.165 [2024-11-20 21:08:17.241774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241964] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.241998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 
21:08:17.242181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:25:59.166 [2024-11-20 21:08:17.242391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:59.166 [2024-11-20 21:08:17.242534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:59.167 [2024-11-20 21:08:17.242553] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:59.167 [2024-11-20 21:08:17.242563] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c9ab7ca-d499-4cca-a63d-6591be32da33 00:25:59.167 [2024-11-20 21:08:17.242573] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:59.167 [2024-11-20 21:08:17.242580] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:59.167 [2024-11-20 21:08:17.242588] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:59.167 [2024-11-20 21:08:17.242596] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:59.167 [2024-11-20 21:08:17.242605] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:59.167 [2024-11-20 21:08:17.242612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:59.167 [2024-11-20 21:08:17.242621] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:59.167 [2024-11-20 21:08:17.242628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:59.167 [2024-11-20 21:08:17.242635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:59.167 [2024-11-20 21:08:17.242642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.167 [2024-11-20 21:08:17.242660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:59.167 [2024-11-20 21:08:17.242668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:25:59.167 [2024-11-20 21:08:17.242680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 21:08:17.244047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.167 [2024-11-20 21:08:17.244079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:59.167 [2024-11-20 21:08:17.244089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:25:59.167 [2024-11-20 21:08:17.244097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 21:08:17.244171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.167 [2024-11-20 21:08:17.244186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:59.167 [2024-11-20 21:08:17.244197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:59.167 [2024-11-20 21:08:17.244206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 21:08:17.249239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.167 [2024-11-20 21:08:17.249274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:59.167 [2024-11-20 21:08:17.249284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.167 [2024-11-20 21:08:17.249293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 21:08:17.249346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.167 [2024-11-20 21:08:17.249356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:59.167 [2024-11-20 21:08:17.249366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.167 [2024-11-20 21:08:17.249377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 21:08:17.249436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.167 [2024-11-20 21:08:17.249450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:59.167 [2024-11-20 21:08:17.249458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.167 [2024-11-20 21:08:17.249466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 21:08:17.249485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.167 [2024-11-20 21:08:17.249496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:59.167 [2024-11-20 21:08:17.249504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.167 [2024-11-20 21:08:17.249514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 21:08:17.258286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:25:59.167 [2024-11-20 21:08:17.258335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:59.167 [2024-11-20 21:08:17.258344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.167 [2024-11-20 21:08:17.258354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 21:08:17.265795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.167 [2024-11-20 21:08:17.265836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:59.167 [2024-11-20 21:08:17.265845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.167 [2024-11-20 21:08:17.265857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 21:08:17.265900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.167 [2024-11-20 21:08:17.265918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:59.167 [2024-11-20 21:08:17.265926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.167 [2024-11-20 21:08:17.265936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 21:08:17.265987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.167 [2024-11-20 21:08:17.265999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:59.167 [2024-11-20 21:08:17.266007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.167 [2024-11-20 21:08:17.266025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 21:08:17.266089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.167 [2024-11-20 21:08:17.266102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:59.167 [2024-11-20 21:08:17.266110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.167 [2024-11-20 21:08:17.266119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 21:08:17.266149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.167 [2024-11-20 21:08:17.266160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:59.167 [2024-11-20 21:08:17.266168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.167 [2024-11-20 21:08:17.266177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 21:08:17.266215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.167 [2024-11-20 21:08:17.266228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:59.167 [2024-11-20 21:08:17.266236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.167 [2024-11-20 21:08:17.266246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 21:08:17.266287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.167 [2024-11-20 21:08:17.266307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:59.167 [2024-11-20 21:08:17.266315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.167 [2024-11-20 21:08:17.266326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.167 [2024-11-20 
21:08:17.266449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.123 ms, result 0 00:25:59.167 true 00:25:59.428 21:08:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 91013 00:25:59.428 21:08:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid91013 00:25:59.428 21:08:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:25:59.428 [2024-11-20 21:08:17.356142] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:25:59.428 [2024-11-20 21:08:17.356269] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91758 ] 00:25:59.428 [2024-11-20 21:08:17.496679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.428 [2024-11-20 21:08:17.516785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:00.813  [2024-11-20T21:08:19.874Z] Copying: 189/1024 [MB] (189 MBps) [2024-11-20T21:08:20.816Z] Copying: 386/1024 [MB] (196 MBps) [2024-11-20T21:08:21.757Z] Copying: 646/1024 [MB] (259 MBps) [2024-11-20T21:08:22.327Z] Copying: 904/1024 [MB] (257 MBps) [2024-11-20T21:08:22.327Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:26:04.208 00:26:04.208 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 91013 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:04.208 21:08:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:04.208 [2024-11-20 21:08:22.233472] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
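This is the crash-simulation step: after the clean unload, the spdk_tgt process (pid 91013 in this run) is killed with SIGKILL so nothing below it gets a chance to clean up, and its pid file is removed. A second 1 GiB random file is then generated, and spdk_dd is launched as a standalone SPDK application that rebuilds the bdev stack from the saved JSON config and writes the new data into the second half of the device. In sketch form (the pid variable name is hypothetical; the trace shows the literal pid):

  # Simulate a power failure: SIGKILL leaves blobstore and bdev state on
  # disk exactly as it was mid-flight.
  kill -9 "$spdk_tgt_pid"
  rm -f "/dev/shm/spdk_tgt_trace.pid$spdk_tgt_pid"

  # Stage a second 1 GiB of random data, then write it at block offset
  # 262144 through ftl0, re-created from the saved subsystem config.
  build/bin/spdk_dd --if=/dev/urandom --of=test/ftl/testfile2 --bs=4096 --count=262144
  build/bin/spdk_dd --if=test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 \
      --json=test/ftl/config/ftl.json

The '91013 Killed' line above is just the shell reporting the SIGKILLed job; the 'Currently unable to find bdev with name: nvc0n1' notices below are most likely the open call waiting for the cache device to appear while the JSON config is still being applied.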
00:26:04.208 [2024-11-20 21:08:22.233590] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91816 ] 00:26:04.469 [2024-11-20 21:08:22.375931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.469 [2024-11-20 21:08:22.396222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.469 [2024-11-20 21:08:22.479561] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:04.469 [2024-11-20 21:08:22.479618] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:04.469 [2024-11-20 21:08:22.541382] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:04.469 [2024-11-20 21:08:22.541604] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:04.469 [2024-11-20 21:08:22.541842] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:05.042 [2024-11-20 21:08:22.868765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.042 [2024-11-20 21:08:22.868809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:05.042 [2024-11-20 21:08:22.868822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:05.042 [2024-11-20 21:08:22.868830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.042 [2024-11-20 21:08:22.868883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.042 [2024-11-20 21:08:22.868900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:05.042 [2024-11-20 21:08:22.868911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:05.042 [2024-11-20 21:08:22.868918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.042 [2024-11-20 21:08:22.868937] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:05.042 [2024-11-20 21:08:22.869240] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:05.042 [2024-11-20 21:08:22.869277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.042 [2024-11-20 21:08:22.869284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:05.042 [2024-11-20 21:08:22.869293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:26:05.042 [2024-11-20 21:08:22.869300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.042 [2024-11-20 21:08:22.870526] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:05.042 [2024-11-20 21:08:22.873284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.042 [2024-11-20 21:08:22.873321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:05.042 [2024-11-20 21:08:22.873331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.756 ms 00:26:05.042 [2024-11-20 21:08:22.873338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.042 [2024-11-20 21:08:22.873392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.042 [2024-11-20 21:08:22.873402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:26:05.042 [2024-11-20 21:08:22.873411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:05.042 [2024-11-20 21:08:22.873418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.042 [2024-11-20 21:08:22.878915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.042 [2024-11-20 21:08:22.878948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:05.042 [2024-11-20 21:08:22.878957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.440 ms 00:26:05.042 [2024-11-20 21:08:22.878965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.042 [2024-11-20 21:08:22.879052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.042 [2024-11-20 21:08:22.879061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:05.042 [2024-11-20 21:08:22.879069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:26:05.042 [2024-11-20 21:08:22.879080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.042 [2024-11-20 21:08:22.879123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.042 [2024-11-20 21:08:22.879132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:05.042 [2024-11-20 21:08:22.879140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:05.042 [2024-11-20 21:08:22.879147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.042 [2024-11-20 21:08:22.879168] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:05.042 [2024-11-20 21:08:22.880592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.042 [2024-11-20 21:08:22.880621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:05.042 [2024-11-20 21:08:22.880631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.429 ms 00:26:05.042 [2024-11-20 21:08:22.880643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.042 [2024-11-20 21:08:22.880673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.042 [2024-11-20 21:08:22.880682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:05.042 [2024-11-20 21:08:22.880690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:05.042 [2024-11-20 21:08:22.880697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.042 [2024-11-20 21:08:22.880716] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:05.042 [2024-11-20 21:08:22.880734] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:05.042 [2024-11-20 21:08:22.880789] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:05.042 [2024-11-20 21:08:22.880807] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:05.042 [2024-11-20 21:08:22.880910] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:05.042 [2024-11-20 21:08:22.880924] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:05.042 
[2024-11-20 21:08:22.880934] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:05.042 [2024-11-20 21:08:22.880948] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:05.042 [2024-11-20 21:08:22.880957] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:05.042 [2024-11-20 21:08:22.880965] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:05.042 [2024-11-20 21:08:22.880973] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:05.042 [2024-11-20 21:08:22.880982] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:05.042 [2024-11-20 21:08:22.880989] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:05.042 [2024-11-20 21:08:22.880999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.042 [2024-11-20 21:08:22.881006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:05.042 [2024-11-20 21:08:22.881017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:26:05.042 [2024-11-20 21:08:22.881024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.042 [2024-11-20 21:08:22.881106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.042 [2024-11-20 21:08:22.881114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:05.042 [2024-11-20 21:08:22.881121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:26:05.042 [2024-11-20 21:08:22.881128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.042 [2024-11-20 21:08:22.881229] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:05.042 [2024-11-20 21:08:22.881239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:05.042 [2024-11-20 21:08:22.881255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:05.042 [2024-11-20 21:08:22.881264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.042 [2024-11-20 21:08:22.881276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:05.042 [2024-11-20 21:08:22.881287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:05.042 [2024-11-20 21:08:22.881295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:05.042 [2024-11-20 21:08:22.881303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:05.042 [2024-11-20 21:08:22.881311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:05.042 [2024-11-20 21:08:22.881318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:05.042 [2024-11-20 21:08:22.881326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:05.042 [2024-11-20 21:08:22.881333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:05.042 [2024-11-20 21:08:22.881340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:05.042 [2024-11-20 21:08:22.881348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:05.042 [2024-11-20 21:08:22.881355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:05.042 [2024-11-20 21:08:22.881363] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.042 [2024-11-20 21:08:22.881370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:05.042 [2024-11-20 21:08:22.881377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:05.042 [2024-11-20 21:08:22.881385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.042 [2024-11-20 21:08:22.881393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:05.042 [2024-11-20 21:08:22.881400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:05.042 [2024-11-20 21:08:22.881413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.042 [2024-11-20 21:08:22.881420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:05.042 [2024-11-20 21:08:22.881429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:05.042 [2024-11-20 21:08:22.881436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.042 [2024-11-20 21:08:22.881443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:05.042 [2024-11-20 21:08:22.881451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:05.042 [2024-11-20 21:08:22.881458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.042 [2024-11-20 21:08:22.881466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:05.042 [2024-11-20 21:08:22.881473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:05.042 [2024-11-20 21:08:22.881480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.042 [2024-11-20 21:08:22.881488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:05.043 [2024-11-20 21:08:22.881495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:05.043 [2024-11-20 21:08:22.881503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:05.043 [2024-11-20 21:08:22.881510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:05.043 [2024-11-20 21:08:22.881518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:05.043 [2024-11-20 21:08:22.881526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:05.043 [2024-11-20 21:08:22.881535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:05.043 [2024-11-20 21:08:22.881542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:05.043 [2024-11-20 21:08:22.881550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.043 [2024-11-20 21:08:22.881558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:05.043 [2024-11-20 21:08:22.881567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:05.043 [2024-11-20 21:08:22.881574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.043 [2024-11-20 21:08:22.881581] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:05.043 [2024-11-20 21:08:22.881590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:05.043 [2024-11-20 21:08:22.881598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:05.043 [2024-11-20 21:08:22.881606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.043 [2024-11-20 
21:08:22.881615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:05.043 [2024-11-20 21:08:22.881623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:05.043 [2024-11-20 21:08:22.881630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:05.043 [2024-11-20 21:08:22.881637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:05.043 [2024-11-20 21:08:22.881645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:05.043 [2024-11-20 21:08:22.881652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:05.043 [2024-11-20 21:08:22.881663] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:05.043 [2024-11-20 21:08:22.881673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.043 [2024-11-20 21:08:22.881681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:05.043 [2024-11-20 21:08:22.881688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:05.043 [2024-11-20 21:08:22.881695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:05.043 [2024-11-20 21:08:22.881702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:05.043 [2024-11-20 21:08:22.881708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:05.043 [2024-11-20 21:08:22.881715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:05.043 [2024-11-20 21:08:22.881722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:05.043 [2024-11-20 21:08:22.881734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:05.043 [2024-11-20 21:08:22.881740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:05.043 [2024-11-20 21:08:22.881760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:05.043 [2024-11-20 21:08:22.881768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:05.043 [2024-11-20 21:08:22.881775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:05.043 [2024-11-20 21:08:22.881782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:05.043 [2024-11-20 21:08:22.881789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:05.043 [2024-11-20 21:08:22.881797] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:26:05.043 [2024-11-20 21:08:22.881806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.043 [2024-11-20 21:08:22.881816] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:05.043 [2024-11-20 21:08:22.881823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:05.043 [2024-11-20 21:08:22.881833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:05.043 [2024-11-20 21:08:22.881841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:05.043 [2024-11-20 21:08:22.881849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.881857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:05.043 [2024-11-20 21:08:22.881864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:26:05.043 [2024-11-20 21:08:22.881871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.891768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.891801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:05.043 [2024-11-20 21:08:22.891810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.857 ms 00:26:05.043 [2024-11-20 21:08:22.891818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.891896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.891909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:05.043 [2024-11-20 21:08:22.891917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:05.043 [2024-11-20 21:08:22.891926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.922354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.922403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:05.043 [2024-11-20 21:08:22.922421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.374 ms 00:26:05.043 [2024-11-20 21:08:22.922430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.922490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.922503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:05.043 [2024-11-20 21:08:22.922512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:05.043 [2024-11-20 21:08:22.922519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.922946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.922978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:05.043 [2024-11-20 21:08:22.922988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:26:05.043 [2024-11-20 21:08:22.922998] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.923137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.923152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:05.043 [2024-11-20 21:08:22.923165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:26:05.043 [2024-11-20 21:08:22.923173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.928974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.929016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:05.043 [2024-11-20 21:08:22.929027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.779 ms 00:26:05.043 [2024-11-20 21:08:22.929034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.931805] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:05.043 [2024-11-20 21:08:22.931840] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:05.043 [2024-11-20 21:08:22.931852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.931860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:05.043 [2024-11-20 21:08:22.931868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.733 ms 00:26:05.043 [2024-11-20 21:08:22.931875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.946651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.946691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:05.043 [2024-11-20 21:08:22.946703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.734 ms 00:26:05.043 [2024-11-20 21:08:22.946711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.948926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.948958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:05.043 [2024-11-20 21:08:22.948968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.184 ms 00:26:05.043 [2024-11-20 21:08:22.948975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.953459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.953505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:05.043 [2024-11-20 21:08:22.953516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.060 ms 00:26:05.043 [2024-11-20 21:08:22.953524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.953890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.953920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:05.043 [2024-11-20 21:08:22.953930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:26:05.043 [2024-11-20 21:08:22.953937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 
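[Editor's note] The layout dump earlier in this startup is internally consistent and worth a quick cross-check: with "L2P entries: 20971520" and "L2P address size: 4" reported by ftl_layout_setup, the size of the l2p region falls out directly. The second line assumes the conventional 4 KiB FTL block size, which the log itself does not state:

```bash
# L2P region = entries x address size: 20971520 * 4 B = 80 MiB,
# matching "Region l2p ... blocks: 80.00 MiB" in the dump above.
echo "$(( 20971520 * 4 / 1024 / 1024 )) MiB"

# Assuming 4 KiB FTL blocks, those entries address the user-visible space:
echo "$(( 20971520 * 4096 / 1024 / 1024 / 1024 )) GiB"   # -> 80 GiB
```

Against the 102400.00 MiB (100 GiB) data_btm region, 80 GiB of user space leaves roughly 20% of the base device held back, which looks like ordinary FTL overprovisioning, though the log does not say so explicitly.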
[2024-11-20 21:08:22.972443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.972498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:05.043 [2024-11-20 21:08:22.972511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.489 ms 00:26:05.043 [2024-11-20 21:08:22.972520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.980336] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:05.043 [2024-11-20 21:08:22.984295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.984345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:05.043 [2024-11-20 21:08:22.984364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.731 ms 00:26:05.043 [2024-11-20 21:08:22.984379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.984518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.984543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:05.043 [2024-11-20 21:08:22.984558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:05.043 [2024-11-20 21:08:22.984576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.984682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.984705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:05.043 [2024-11-20 21:08:22.984721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:05.043 [2024-11-20 21:08:22.984739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.984799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.984813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:05.043 [2024-11-20 21:08:22.984827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:05.043 [2024-11-20 21:08:22.984839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.984893] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:05.043 [2024-11-20 21:08:22.984908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.984921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:05.043 [2024-11-20 21:08:22.984935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:26:05.043 [2024-11-20 21:08:22.984947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.990212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.990263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:05.043 [2024-11-20 21:08:22.990274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.234 ms 00:26:05.043 [2024-11-20 21:08:22.990282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.990361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.043 [2024-11-20 21:08:22.990371] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:05.043 [2024-11-20 21:08:22.990379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:26:05.043 [2024-11-20 21:08:22.990387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.043 [2024-11-20 21:08:22.991477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 122.301 ms, result 0 00:26:05.986  [2024-11-20T21:08:25.049Z] Copying: 16/1024 [MB] (16 MBps) [... ~60 intermediate 'Copying: N/1024 [MB]' progress updates elided ...] [2024-11-20T21:09:26.975Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-20 21:09:26.819912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.856 [2024-11-20 21:09:26.819999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:08.856 [2024-11-20 21:09:26.820016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:08.856 [2024-11-20 21:09:26.820026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.856 [2024-11-20 21:09:26.821214] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:08.856 [2024-11-20 21:09:26.823090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.856 [2024-11-20 21:09:26.823144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:08.856 [2024-11-20 21:09:26.823157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.823 ms 00:27:08.856 [2024-11-20 21:09:26.823167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.856 [2024-11-20 21:09:26.837232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.856 [2024-11-20 21:09:26.837296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:08.856 [2024-11-20 21:09:26.837312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.801 ms 00:27:08.856 [2024-11-20 21:09:26.837321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.856 [2024-11-20 21:09:26.860543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.856 [2024-11-20 21:09:26.860611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:08.856 [2024-11-20 21:09:26.860623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.203 ms 00:27:08.856 [2024-11-20 21:09:26.860633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.856 [2024-11-20 21:09:26.866801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.856 [2024-11-20 21:09:26.866846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:08.856 [2024-11-20 21:09:26.866860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.128 ms 00:27:08.856 [2024-11-20 21:09:26.866870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.856 [2024-11-20 21:09:26.869644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.856 [2024-11-20 21:09:26.869695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:08.856 [2024-11-20 21:09:26.869706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.718 ms 00:27:08.856 [2024-11-20 21:09:26.869714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
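[Editor's note] Every management step in this log is bracketed by the same four trace_step records: Action (or Rollback), name, duration, status. That regularity makes the console output easy to mine when a step looks slow, like Persist L2P at 23.203 ms just above or the Persist P2L metadata step that follows. A throwaway sketch, assuming one record per line as Jenkins originally printed them; build.log is a hypothetical saved copy of this output:

```bash
# Rank FTL management steps by reported duration, slowest first.
awk '/428:trace_step/ { sub(/.*name: /, ""); name = $0 }
     /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                        printf "%10.3f ms  %s\n", $0, name }' build.log |
  sort -rn | head
```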
00:27:08.856 [2024-11-20 21:09:26.874514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.856 [2024-11-20 21:09:26.874577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:08.856 [2024-11-20 21:09:26.874589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.742 ms 00:27:08.856 [2024-11-20 21:09:26.874597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.118 [2024-11-20 21:09:27.009537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.118 [2024-11-20 21:09:27.009609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:09.118 [2024-11-20 21:09:27.009623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 134.891 ms 00:27:09.118 [2024-11-20 21:09:27.009632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.118 [2024-11-20 21:09:27.012320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.118 [2024-11-20 21:09:27.012374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:09.118 [2024-11-20 21:09:27.012385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.671 ms 00:27:09.118 [2024-11-20 21:09:27.012394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.118 [2024-11-20 21:09:27.014575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.118 [2024-11-20 21:09:27.014628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:09.118 [2024-11-20 21:09:27.014638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:27:09.118 [2024-11-20 21:09:27.014645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.118 [2024-11-20 21:09:27.016818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.118 [2024-11-20 21:09:27.016863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:09.118 [2024-11-20 21:09:27.016873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.131 ms 00:27:09.118 [2024-11-20 21:09:27.016880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.118 [2024-11-20 21:09:27.019283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.118 [2024-11-20 21:09:27.019337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:09.118 [2024-11-20 21:09:27.019348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.333 ms 00:27:09.118 [2024-11-20 21:09:27.019355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.118 [2024-11-20 21:09:27.019393] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:09.118 [2024-11-20 21:09:27.019407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 97024 / 261120 wr_cnt: 1 state: open 00:27:09.118 [2024-11-20 21:09:27.019432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free [... Bands 3-100: 98 identical '0 / 261120 wr_cnt: 0 state: free' records elided ...] 00:27:09.119 [2024-11-20 21:09:27.020250] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:09.119 [2024-11-20 21:09:27.020258] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c9ab7ca-d499-4cca-a63d-6591be32da33 00:27:09.119 [2024-11-20 21:09:27.020274] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 97024 00:27:09.119 [2024-11-20 21:09:27.020282] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 97984 00:27:09.119 [2024-11-20
21:09:27.020289] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 97024 00:27:09.119 [2024-11-20 21:09:27.020298] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0099 00:27:09.119 [2024-11-20 21:09:27.020305] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:09.119 [2024-11-20 21:09:27.020317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:09.119 [2024-11-20 21:09:27.020324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:09.119 [2024-11-20 21:09:27.020331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:09.119 [2024-11-20 21:09:27.020338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:09.119 [2024-11-20 21:09:27.020346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.119 [2024-11-20 21:09:27.020354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:09.119 [2024-11-20 21:09:27.020363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:27:09.119 [2024-11-20 21:09:27.020374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.119 [2024-11-20 21:09:27.022720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.119 [2024-11-20 21:09:27.022781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:09.119 [2024-11-20 21:09:27.022794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.325 ms 00:27:09.119 [2024-11-20 21:09:27.022803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.119 [2024-11-20 21:09:27.022920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.119 [2024-11-20 21:09:27.022943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:09.119 [2024-11-20 21:09:27.022953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:27:09.119 [2024-11-20 21:09:27.022961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.119 [2024-11-20 21:09:27.030531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.119 [2024-11-20 21:09:27.030584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:09.120 [2024-11-20 21:09:27.030596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.120 [2024-11-20 21:09:27.030604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-20 21:09:27.030669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.120 [2024-11-20 21:09:27.030681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:09.120 [2024-11-20 21:09:27.030689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.120 [2024-11-20 21:09:27.030701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-20 21:09:27.030767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.120 [2024-11-20 21:09:27.030778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:09.120 [2024-11-20 21:09:27.030787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.120 [2024-11-20 21:09:27.030794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-20 21:09:27.030810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:27:09.120 [2024-11-20 21:09:27.030818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:09.120 [2024-11-20 21:09:27.030829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.120 [2024-11-20 21:09:27.030836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-20 21:09:27.043672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.120 [2024-11-20 21:09:27.043721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:09.120 [2024-11-20 21:09:27.043731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.120 [2024-11-20 21:09:27.043738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-20 21:09:27.052869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.120 [2024-11-20 21:09:27.052919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:09.120 [2024-11-20 21:09:27.052929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.120 [2024-11-20 21:09:27.052936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-20 21:09:27.052982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.120 [2024-11-20 21:09:27.052990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:09.120 [2024-11-20 21:09:27.052997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.120 [2024-11-20 21:09:27.053010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-20 21:09:27.053031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.120 [2024-11-20 21:09:27.053038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:09.120 [2024-11-20 21:09:27.053045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.120 [2024-11-20 21:09:27.053054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-20 21:09:27.053114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.120 [2024-11-20 21:09:27.053127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:09.120 [2024-11-20 21:09:27.053134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.120 [2024-11-20 21:09:27.053140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-20 21:09:27.053168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.120 [2024-11-20 21:09:27.053176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:09.120 [2024-11-20 21:09:27.053183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.120 [2024-11-20 21:09:27.053189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-20 21:09:27.053230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.120 [2024-11-20 21:09:27.053240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:09.120 [2024-11-20 21:09:27.053249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.120 [2024-11-20 21:09:27.053256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 
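[Editor's note] The ftl_dev_dump_stats block a few records up reports total writes: 97984 against user writes: 97024, and the WAF line is simply their ratio; the 960-block difference is presumably FTL metadata written alongside the user data. Checking the arithmetic:

```bash
# WAF = total media writes / user writes (figures from the stats dump above)
echo "scale=4; 97984 / 97024" | bc   # prints 1.0098; bc truncates, the log's 1.0099 is the rounded value
```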
[2024-11-20 21:09:27.053295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.120 [2024-11-20 21:09:27.053310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:09.120 [2024-11-20 21:09:27.053317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.120 [2024-11-20 21:09:27.053327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.120 [2024-11-20 21:09:27.053439] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 237.247 ms, result 0 00:27:09.692 00:27:09.692 00:27:09.692 21:09:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:12.242 21:09:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:12.242 [2024-11-20 21:09:29.792694] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:27:12.242 [2024-11-20 21:09:29.792827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92503 ] 00:27:12.242 [2024-11-20 21:09:29.935262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.242 [2024-11-20 21:09:29.956610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.242 [2024-11-20 21:09:30.044154] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:12.242 [2024-11-20 21:09:30.044201] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:12.242 [2024-11-20 21:09:30.197506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.242 [2024-11-20 21:09:30.197542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:12.242 [2024-11-20 21:09:30.197553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:12.242 [2024-11-20 21:09:30.197559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.242 [2024-11-20 21:09:30.197592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.242 [2024-11-20 21:09:30.197599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:12.242 [2024-11-20 21:09:30.197605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:27:12.242 [2024-11-20 21:09:30.197611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.242 [2024-11-20 21:09:30.197627] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:12.242 [2024-11-20 21:09:30.197815] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:12.242 [2024-11-20 21:09:30.197827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.242 [2024-11-20 21:09:30.197836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:12.242 [2024-11-20 21:09:30.197842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:27:12.242 [2024-11-20 21:09:30.197850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:12.242 [2024-11-20 21:09:30.199000] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:12.242 [2024-11-20 21:09:30.201249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.242 [2024-11-20 21:09:30.201278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:12.242 [2024-11-20 21:09:30.201287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.253 ms 00:27:12.242 [2024-11-20 21:09:30.201293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.242 [2024-11-20 21:09:30.201346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.242 [2024-11-20 21:09:30.201354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:12.242 [2024-11-20 21:09:30.201364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:12.242 [2024-11-20 21:09:30.201369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.242 [2024-11-20 21:09:30.205670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.242 [2024-11-20 21:09:30.205693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:12.242 [2024-11-20 21:09:30.205704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.259 ms 00:27:12.242 [2024-11-20 21:09:30.205715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.242 [2024-11-20 21:09:30.205793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.242 [2024-11-20 21:09:30.205801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:12.242 [2024-11-20 21:09:30.205807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:27:12.242 [2024-11-20 21:09:30.205813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.242 [2024-11-20 21:09:30.205846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.242 [2024-11-20 21:09:30.205855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:12.242 [2024-11-20 21:09:30.205864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:12.243 [2024-11-20 21:09:30.205870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.243 [2024-11-20 21:09:30.205889] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:12.243 [2024-11-20 21:09:30.207049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.243 [2024-11-20 21:09:30.207070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:12.243 [2024-11-20 21:09:30.207077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.164 ms 00:27:12.243 [2024-11-20 21:09:30.207083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.243 [2024-11-20 21:09:30.207105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.243 [2024-11-20 21:09:30.207111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:12.243 [2024-11-20 21:09:30.207117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:12.243 [2024-11-20 21:09:30.207127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.243 [2024-11-20 21:09:30.207144] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
FTL layout setup mode 0 00:27:12.243 [2024-11-20 21:09:30.207159] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:12.243 [2024-11-20 21:09:30.207192] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:12.243 [2024-11-20 21:09:30.207204] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:12.243 [2024-11-20 21:09:30.207286] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:12.243 [2024-11-20 21:09:30.207298] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:12.243 [2024-11-20 21:09:30.207306] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:12.243 [2024-11-20 21:09:30.207316] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:12.243 [2024-11-20 21:09:30.207322] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:12.243 [2024-11-20 21:09:30.207331] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:12.243 [2024-11-20 21:09:30.207337] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:12.243 [2024-11-20 21:09:30.207342] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:12.243 [2024-11-20 21:09:30.207348] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:12.243 [2024-11-20 21:09:30.207354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.243 [2024-11-20 21:09:30.207362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:12.243 [2024-11-20 21:09:30.207367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:27:12.243 [2024-11-20 21:09:30.207375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.243 [2024-11-20 21:09:30.207437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.243 [2024-11-20 21:09:30.207449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:12.243 [2024-11-20 21:09:30.207456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:12.243 [2024-11-20 21:09:30.207462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.243 [2024-11-20 21:09:30.207539] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:12.243 [2024-11-20 21:09:30.207547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:12.243 [2024-11-20 21:09:30.207553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:12.243 [2024-11-20 21:09:30.207559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.243 [2024-11-20 21:09:30.207565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:12.243 [2024-11-20 21:09:30.207571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:12.243 [2024-11-20 21:09:30.207577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:12.243 [2024-11-20 21:09:30.207584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:12.243 [2024-11-20 21:09:30.207590] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:12.243 [2024-11-20 21:09:30.207595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:12.243 [2024-11-20 21:09:30.207601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:12.243 [2024-11-20 21:09:30.207606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:12.243 [2024-11-20 21:09:30.207611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:12.243 [2024-11-20 21:09:30.207616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:12.243 [2024-11-20 21:09:30.207624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:12.243 [2024-11-20 21:09:30.207630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.243 [2024-11-20 21:09:30.207635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:12.243 [2024-11-20 21:09:30.207641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:12.243 [2024-11-20 21:09:30.207646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.243 [2024-11-20 21:09:30.207652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:12.243 [2024-11-20 21:09:30.207657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:12.243 [2024-11-20 21:09:30.207662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:12.243 [2024-11-20 21:09:30.207668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:12.243 [2024-11-20 21:09:30.207673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:12.243 [2024-11-20 21:09:30.207677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:12.243 [2024-11-20 21:09:30.207682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:12.243 [2024-11-20 21:09:30.207687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:12.243 [2024-11-20 21:09:30.207692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:12.243 [2024-11-20 21:09:30.207696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:12.243 [2024-11-20 21:09:30.207702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:12.243 [2024-11-20 21:09:30.207710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:12.243 [2024-11-20 21:09:30.207716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:12.243 [2024-11-20 21:09:30.207722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:12.243 [2024-11-20 21:09:30.207727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:12.243 [2024-11-20 21:09:30.207734] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:12.243 [2024-11-20 21:09:30.207739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:12.243 [2024-11-20 21:09:30.207764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:12.243 [2024-11-20 21:09:30.207771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:12.243 [2024-11-20 21:09:30.207777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:12.243 [2024-11-20 21:09:30.207782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:27:12.243 [2024-11-20 21:09:30.207789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:12.243 [2024-11-20 21:09:30.207795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:12.243 [2024-11-20 21:09:30.207801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.243 [2024-11-20 21:09:30.207807] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:12.243 [2024-11-20 21:09:30.207815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:12.243 [2024-11-20 21:09:30.207823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:12.243 [2024-11-20 21:09:30.207831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.243 [2024-11-20 21:09:30.207838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:12.243 [2024-11-20 21:09:30.207844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:12.243 [2024-11-20 21:09:30.207850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:12.243 [2024-11-20 21:09:30.207856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:12.243 [2024-11-20 21:09:30.207861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:12.243 [2024-11-20 21:09:30.207867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:12.243 [2024-11-20 21:09:30.207874] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:12.243 [2024-11-20 21:09:30.207881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:12.244 [2024-11-20 21:09:30.207888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:12.244 [2024-11-20 21:09:30.207902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:12.244 [2024-11-20 21:09:30.207908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:12.244 [2024-11-20 21:09:30.207914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:12.244 [2024-11-20 21:09:30.207921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:12.244 [2024-11-20 21:09:30.207928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:12.244 [2024-11-20 21:09:30.207934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:12.244 [2024-11-20 21:09:30.207942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:12.244 [2024-11-20 21:09:30.207948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:12.244 [2024-11-20 21:09:30.207954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:12.244 [2024-11-20 
21:09:30.207960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:12.244 [2024-11-20 21:09:30.207970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:12.244 [2024-11-20 21:09:30.207977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:12.244 [2024-11-20 21:09:30.207984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:12.244 [2024-11-20 21:09:30.207990] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:12.244 [2024-11-20 21:09:30.207997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:12.244 [2024-11-20 21:09:30.208006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:12.244 [2024-11-20 21:09:30.208016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:12.244 [2024-11-20 21:09:30.208022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:12.244 [2024-11-20 21:09:30.208028] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:12.244 [2024-11-20 21:09:30.208034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.244 [2024-11-20 21:09:30.208040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:12.244 [2024-11-20 21:09:30.208047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:27:12.244 [2024-11-20 21:09:30.208055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.244 [2024-11-20 21:09:30.215807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.244 [2024-11-20 21:09:30.215834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:12.244 [2024-11-20 21:09:30.215842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.719 ms 00:27:12.244 [2024-11-20 21:09:30.215848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.244 [2024-11-20 21:09:30.215919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.244 [2024-11-20 21:09:30.215925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:12.244 [2024-11-20 21:09:30.215936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:12.244 [2024-11-20 21:09:30.215944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.244 [2024-11-20 21:09:30.232643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.244 [2024-11-20 21:09:30.232676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:12.244 [2024-11-20 21:09:30.232685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.656 ms 00:27:12.244 [2024-11-20 21:09:30.232692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.244 [2024-11-20 
21:09:30.232722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.244 [2024-11-20 21:09:30.232733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:12.244 [2024-11-20 21:09:30.232740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:12.244 [2024-11-20 21:09:30.232758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.244 [2024-11-20 21:09:30.233063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.244 [2024-11-20 21:09:30.233083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:12.244 [2024-11-20 21:09:30.233098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:27:12.244 [2024-11-20 21:09:30.233104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.244 [2024-11-20 21:09:30.233203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.244 [2024-11-20 21:09:30.233212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:12.244 [2024-11-20 21:09:30.233218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:27:12.244 [2024-11-20 21:09:30.233225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.244 [2024-11-20 21:09:30.238693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.244 [2024-11-20 21:09:30.238741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:12.244 [2024-11-20 21:09:30.238770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.450 ms 00:27:12.244 [2024-11-20 21:09:30.238781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.244 [2024-11-20 21:09:30.241802] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:12.244 [2024-11-20 21:09:30.241840] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:12.244 [2024-11-20 21:09:30.241856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.244 [2024-11-20 21:09:30.241868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:12.244 [2024-11-20 21:09:30.241883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.975 ms 00:27:12.244 [2024-11-20 21:09:30.241894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.244 [2024-11-20 21:09:30.254877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.244 [2024-11-20 21:09:30.254908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:12.244 [2024-11-20 21:09:30.254916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.937 ms 00:27:12.244 [2024-11-20 21:09:30.254922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.244 [2024-11-20 21:09:30.256605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.244 [2024-11-20 21:09:30.256630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:12.244 [2024-11-20 21:09:30.256637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.652 ms 00:27:12.244 [2024-11-20 21:09:30.256643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.244 [2024-11-20 21:09:30.258067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:27:12.244 [2024-11-20 21:09:30.258091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:12.244 [2024-11-20 21:09:30.258099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.399 ms 00:27:12.244 [2024-11-20 21:09:30.258104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.244 [2024-11-20 21:09:30.258347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.244 [2024-11-20 21:09:30.258357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:12.244 [2024-11-20 21:09:30.258365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:27:12.244 [2024-11-20 21:09:30.258371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.244 [2024-11-20 21:09:30.272949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.244 [2024-11-20 21:09:30.272993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:12.244 [2024-11-20 21:09:30.273004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.566 ms 00:27:12.244 [2024-11-20 21:09:30.273013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.244 [2024-11-20 21:09:30.278704] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:12.244 [2024-11-20 21:09:30.280538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.244 [2024-11-20 21:09:30.280563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:12.244 [2024-11-20 21:09:30.280575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.495 ms 00:27:12.244 [2024-11-20 21:09:30.280582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.245 [2024-11-20 21:09:30.280623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.245 [2024-11-20 21:09:30.280630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:12.245 [2024-11-20 21:09:30.280637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:12.245 [2024-11-20 21:09:30.280645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.245 [2024-11-20 21:09:30.281720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.245 [2024-11-20 21:09:30.281764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:12.245 [2024-11-20 21:09:30.281772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:27:12.245 [2024-11-20 21:09:30.281781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.245 [2024-11-20 21:09:30.281801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.245 [2024-11-20 21:09:30.281808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:12.245 [2024-11-20 21:09:30.281814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:12.245 [2024-11-20 21:09:30.281821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.245 [2024-11-20 21:09:30.281849] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:12.245 [2024-11-20 21:09:30.281856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.245 [2024-11-20 21:09:30.281862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on 
startup 00:27:12.245 [2024-11-20 21:09:30.281871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:12.245 [2024-11-20 21:09:30.281877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.245 [2024-11-20 21:09:30.285270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.245 [2024-11-20 21:09:30.285298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:12.245 [2024-11-20 21:09:30.285306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.378 ms 00:27:12.245 [2024-11-20 21:09:30.285312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.245 [2024-11-20 21:09:30.285366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.245 [2024-11-20 21:09:30.285375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:12.245 [2024-11-20 21:09:30.285382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:12.245 [2024-11-20 21:09:30.285388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.245 [2024-11-20 21:09:30.286210] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 88.364 ms, result 0 00:27:13.633  [2024-11-20T21:09:32.698Z] Copying: 1148/1048576 [kB] (1148 kBps) [2024-11-20T21:09:33.643Z] Copying: 4304/1048576 [kB] (3156 kBps) [2024-11-20T21:09:34.588Z] Copying: 14412/1048576 [kB] (10108 kBps) [2024-11-20T21:09:35.533Z] Copying: 39/1024 [MB] (25 MBps) [2024-11-20T21:09:36.477Z] Copying: 55/1024 [MB] (16 MBps) [2024-11-20T21:09:37.863Z] Copying: 71/1024 [MB] (16 MBps) [2024-11-20T21:09:38.808Z] Copying: 100/1024 [MB] (28 MBps) [2024-11-20T21:09:39.749Z] Copying: 118/1024 [MB] (18 MBps) [2024-11-20T21:09:40.692Z] Copying: 136/1024 [MB] (17 MBps) [2024-11-20T21:09:41.636Z] Copying: 152/1024 [MB] (15 MBps) [2024-11-20T21:09:42.579Z] Copying: 168/1024 [MB] (15 MBps) [2024-11-20T21:09:43.524Z] Copying: 190/1024 [MB] (22 MBps) [2024-11-20T21:09:44.911Z] Copying: 210/1024 [MB] (19 MBps) [2024-11-20T21:09:45.484Z] Copying: 229/1024 [MB] (18 MBps) [2024-11-20T21:09:46.868Z] Copying: 263/1024 [MB] (33 MBps) [2024-11-20T21:09:47.846Z] Copying: 278/1024 [MB] (15 MBps) [2024-11-20T21:09:48.799Z] Copying: 299/1024 [MB] (20 MBps) [2024-11-20T21:09:49.744Z] Copying: 316/1024 [MB] (16 MBps) [2024-11-20T21:09:50.689Z] Copying: 335/1024 [MB] (18 MBps) [2024-11-20T21:09:51.634Z] Copying: 356/1024 [MB] (21 MBps) [2024-11-20T21:09:52.581Z] Copying: 377/1024 [MB] (21 MBps) [2024-11-20T21:09:53.526Z] Copying: 393/1024 [MB] (15 MBps) [2024-11-20T21:09:54.916Z] Copying: 410/1024 [MB] (16 MBps) [2024-11-20T21:09:55.489Z] Copying: 426/1024 [MB] (16 MBps) [2024-11-20T21:09:56.881Z] Copying: 453/1024 [MB] (27 MBps) [2024-11-20T21:09:57.826Z] Copying: 476/1024 [MB] (22 MBps) [2024-11-20T21:09:58.771Z] Copying: 518/1024 [MB] (42 MBps) [2024-11-20T21:09:59.716Z] Copying: 537/1024 [MB] (18 MBps) [2024-11-20T21:10:00.661Z] Copying: 554/1024 [MB] (17 MBps) [2024-11-20T21:10:01.604Z] Copying: 576/1024 [MB] (21 MBps) [2024-11-20T21:10:02.551Z] Copying: 614/1024 [MB] (37 MBps) [2024-11-20T21:10:03.495Z] Copying: 630/1024 [MB] (16 MBps) [2024-11-20T21:10:04.883Z] Copying: 653/1024 [MB] (23 MBps) [2024-11-20T21:10:05.828Z] Copying: 690/1024 [MB] (37 MBps) [2024-11-20T21:10:06.774Z] Copying: 716/1024 [MB] (25 MBps) [2024-11-20T21:10:07.719Z] Copying: 731/1024 [MB] (15 MBps) [2024-11-20T21:10:08.664Z] Copying: 753/1024 [MB] 
(21 MBps) [2024-11-20T21:10:09.608Z] Copying: 770/1024 [MB] (16 MBps) [2024-11-20T21:10:10.554Z] Copying: 793/1024 [MB] (23 MBps) [2024-11-20T21:10:11.499Z] Copying: 815/1024 [MB] (21 MBps) [2024-11-20T21:10:12.887Z] Copying: 839/1024 [MB] (23 MBps) [2024-11-20T21:10:13.833Z] Copying: 866/1024 [MB] (27 MBps) [2024-11-20T21:10:14.778Z] Copying: 888/1024 [MB] (22 MBps) [2024-11-20T21:10:15.725Z] Copying: 905/1024 [MB] (16 MBps) [2024-11-20T21:10:16.674Z] Copying: 937/1024 [MB] (32 MBps) [2024-11-20T21:10:17.630Z] Copying: 954/1024 [MB] (16 MBps) [2024-11-20T21:10:18.577Z] Copying: 986/1024 [MB] (32 MBps) [2024-11-20T21:10:19.155Z] Copying: 1011/1024 [MB] (24 MBps) [2024-11-20T21:10:19.492Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-20 21:10:19.179483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.373 [2024-11-20 21:10:19.179701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:01.373 [2024-11-20 21:10:19.179725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:01.373 [2024-11-20 21:10:19.179741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.373 [2024-11-20 21:10:19.179809] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:01.373 [2024-11-20 21:10:19.180669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.373 [2024-11-20 21:10:19.180721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:01.373 [2024-11-20 21:10:19.180739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:28:01.373 [2024-11-20 21:10:19.180788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.373 [2024-11-20 21:10:19.181175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.373 [2024-11-20 21:10:19.181202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:01.373 [2024-11-20 21:10:19.181218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:28:01.373 [2024-11-20 21:10:19.181231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.373 [2024-11-20 21:10:19.193023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.373 [2024-11-20 21:10:19.193072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:01.373 [2024-11-20 21:10:19.193082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.765 ms 00:28:01.373 [2024-11-20 21:10:19.193094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.373 [2024-11-20 21:10:19.197883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.373 [2024-11-20 21:10:19.197911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:01.373 [2024-11-20 21:10:19.197927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.760 ms 00:28:01.373 [2024-11-20 21:10:19.197934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.373 [2024-11-20 21:10:19.199203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.373 [2024-11-20 21:10:19.199237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:01.373 [2024-11-20 21:10:19.199246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:28:01.373 [2024-11-20 21:10:19.199252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:01.373 [2024-11-20 21:10:19.203056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.373 [2024-11-20 21:10:19.203096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:01.373 [2024-11-20 21:10:19.203106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.773 ms 00:28:01.373 [2024-11-20 21:10:19.203112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.373 [2024-11-20 21:10:19.205328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.373 [2024-11-20 21:10:19.205360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:01.373 [2024-11-20 21:10:19.205368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.181 ms 00:28:01.373 [2024-11-20 21:10:19.205375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.373 [2024-11-20 21:10:19.207370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.373 [2024-11-20 21:10:19.207411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:01.373 [2024-11-20 21:10:19.207419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.982 ms 00:28:01.373 [2024-11-20 21:10:19.207425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.373 [2024-11-20 21:10:19.208741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.373 [2024-11-20 21:10:19.208787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:01.373 [2024-11-20 21:10:19.208795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:28:01.373 [2024-11-20 21:10:19.208801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.373 [2024-11-20 21:10:19.209996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.373 [2024-11-20 21:10:19.210025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:01.373 [2024-11-20 21:10:19.210033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms 00:28:01.373 [2024-11-20 21:10:19.210038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.373 [2024-11-20 21:10:19.211291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.373 [2024-11-20 21:10:19.211322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:01.373 [2024-11-20 21:10:19.211331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.201 ms 00:28:01.373 [2024-11-20 21:10:19.211337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.373 [2024-11-20 21:10:19.211366] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:01.373 [2024-11-20 21:10:19.211379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:01.373 [2024-11-20 21:10:19.211389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:01.373 [2024-11-20 21:10:19.211396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:01.373 [2024-11-20 21:10:19.211404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:01.373 [2024-11-20 21:10:19.211411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 
wr_cnt: 0 state: free 00:28:01.373 [2024-11-20 21:10:19.211418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:01.373 [2024-11-20 21:10:19.211425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:01.373 [2024-11-20 21:10:19.211432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:01.373 [2024-11-20 21:10:19.211439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:01.373 [2024-11-20 21:10:19.211446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:01.373 [2024-11-20 21:10:19.211453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211778] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211932] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.211995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.212001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.212007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.212013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.212019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.212025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.212031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.212037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.212042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.212049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:01.374 [2024-11-20 21:10:19.212055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:01.375 [2024-11-20 21:10:19.212068] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:01.375 [2024-11-20 21:10:19.212085] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c9ab7ca-d499-4cca-a63d-6591be32da33 00:28:01.375 [2024-11-20 21:10:19.212094] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:01.375 [2024-11-20 21:10:19.212103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 167616 00:28:01.375 [2024-11-20 21:10:19.212109] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user 
writes: 165632 00:28:01.375 [2024-11-20 21:10:19.212116] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0120 00:28:01.375 [2024-11-20 21:10:19.212124] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:01.375 [2024-11-20 21:10:19.212131] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:01.375 [2024-11-20 21:10:19.212140] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:01.375 [2024-11-20 21:10:19.212145] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:01.375 [2024-11-20 21:10:19.212150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:01.375 [2024-11-20 21:10:19.212157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.375 [2024-11-20 21:10:19.212173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:01.375 [2024-11-20 21:10:19.212180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:28:01.375 [2024-11-20 21:10:19.212189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.213766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.375 [2024-11-20 21:10:19.213797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:01.375 [2024-11-20 21:10:19.213805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.562 ms 00:28:01.375 [2024-11-20 21:10:19.213811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.213908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:01.375 [2024-11-20 21:10:19.213917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:01.375 [2024-11-20 21:10:19.213924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:01.375 [2024-11-20 21:10:19.213934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.219034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.375 [2024-11-20 21:10:19.219066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:01.375 [2024-11-20 21:10:19.219074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.375 [2024-11-20 21:10:19.219080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.219124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.375 [2024-11-20 21:10:19.219131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:01.375 [2024-11-20 21:10:19.219138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.375 [2024-11-20 21:10:19.219148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.219182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.375 [2024-11-20 21:10:19.219190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:01.375 [2024-11-20 21:10:19.219196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.375 [2024-11-20 21:10:19.219202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.219214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.375 [2024-11-20 21:10:19.219221] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:01.375 [2024-11-20 21:10:19.219227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.375 [2024-11-20 21:10:19.219236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.228423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.375 [2024-11-20 21:10:19.228457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:01.375 [2024-11-20 21:10:19.228465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.375 [2024-11-20 21:10:19.228471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.235731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.375 [2024-11-20 21:10:19.235778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:01.375 [2024-11-20 21:10:19.235787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.375 [2024-11-20 21:10:19.235798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.235834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.375 [2024-11-20 21:10:19.235842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:01.375 [2024-11-20 21:10:19.235848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.375 [2024-11-20 21:10:19.235854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.235878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.375 [2024-11-20 21:10:19.235884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:01.375 [2024-11-20 21:10:19.235891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.375 [2024-11-20 21:10:19.235896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.235955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.375 [2024-11-20 21:10:19.235962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:01.375 [2024-11-20 21:10:19.235972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.375 [2024-11-20 21:10:19.235977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.236000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.375 [2024-11-20 21:10:19.236008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:01.375 [2024-11-20 21:10:19.236014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.375 [2024-11-20 21:10:19.236020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.236051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:01.375 [2024-11-20 21:10:19.236057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:01.375 [2024-11-20 21:10:19.236063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.375 [2024-11-20 21:10:19.236069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.236103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:28:01.375 [2024-11-20 21:10:19.236111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:01.375 [2024-11-20 21:10:19.236117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:01.375 [2024-11-20 21:10:19.236123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:01.375 [2024-11-20 21:10:19.236227] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.738 ms, result 0 00:28:01.375 00:28:01.375 00:28:01.654 21:10:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:03.564 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:03.564 21:10:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:03.564 [2024-11-20 21:10:21.464798] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:28:03.564 [2024-11-20 21:10:21.464896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93034 ] 00:28:03.564 [2024-11-20 21:10:21.607863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.564 [2024-11-20 21:10:21.629178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.826 [2024-11-20 21:10:21.735338] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:03.826 [2024-11-20 21:10:21.735425] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:03.826 [2024-11-20 21:10:21.896397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.826 [2024-11-20 21:10:21.896459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:03.826 [2024-11-20 21:10:21.896478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:03.826 [2024-11-20 21:10:21.896487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.826 [2024-11-20 21:10:21.896546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.826 [2024-11-20 21:10:21.896557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:03.826 [2024-11-20 21:10:21.896566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:03.826 [2024-11-20 21:10:21.896578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.826 [2024-11-20 21:10:21.896604] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:03.826 [2024-11-20 21:10:21.897240] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:03.826 [2024-11-20 21:10:21.897296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.826 [2024-11-20 21:10:21.897306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:03.826 [2024-11-20 21:10:21.897317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:28:03.826 [2024-11-20 21:10:21.897329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:03.826 [2024-11-20 21:10:21.899649] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:03.826 [2024-11-20 21:10:21.903291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.826 [2024-11-20 21:10:21.903347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:03.826 [2024-11-20 21:10:21.903360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.645 ms 00:28:03.826 [2024-11-20 21:10:21.903376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.826 [2024-11-20 21:10:21.903457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.826 [2024-11-20 21:10:21.903468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:03.826 [2024-11-20 21:10:21.903477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:03.826 [2024-11-20 21:10:21.903485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.826 [2024-11-20 21:10:21.911382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.826 [2024-11-20 21:10:21.911430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:03.826 [2024-11-20 21:10:21.911440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.850 ms 00:28:03.826 [2024-11-20 21:10:21.911455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.826 [2024-11-20 21:10:21.911559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.826 [2024-11-20 21:10:21.911573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:03.826 [2024-11-20 21:10:21.911587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:28:03.826 [2024-11-20 21:10:21.911600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.826 [2024-11-20 21:10:21.911661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.826 [2024-11-20 21:10:21.911675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:03.826 [2024-11-20 21:10:21.911684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:03.826 [2024-11-20 21:10:21.911692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.826 [2024-11-20 21:10:21.911718] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:03.826 [2024-11-20 21:10:21.913699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.826 [2024-11-20 21:10:21.913760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:03.826 [2024-11-20 21:10:21.913771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.989 ms 00:28:03.826 [2024-11-20 21:10:21.913779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.826 [2024-11-20 21:10:21.913812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.826 [2024-11-20 21:10:21.913825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:03.826 [2024-11-20 21:10:21.913838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:03.826 [2024-11-20 21:10:21.913846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.826 [2024-11-20 21:10:21.913870] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
FTL layout setup mode 0 00:28:03.826 [2024-11-20 21:10:21.913890] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:03.826 [2024-11-20 21:10:21.913931] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:03.826 [2024-11-20 21:10:21.913947] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:03.826 [2024-11-20 21:10:21.914076] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:03.826 [2024-11-20 21:10:21.914088] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:03.826 [2024-11-20 21:10:21.914099] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:03.826 [2024-11-20 21:10:21.914112] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:03.826 [2024-11-20 21:10:21.914125] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:03.826 [2024-11-20 21:10:21.914133] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:03.826 [2024-11-20 21:10:21.914140] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:03.826 [2024-11-20 21:10:21.914152] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:03.826 [2024-11-20 21:10:21.914160] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:03.826 [2024-11-20 21:10:21.914167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.826 [2024-11-20 21:10:21.914177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:03.826 [2024-11-20 21:10:21.914185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:28:03.826 [2024-11-20 21:10:21.914192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.826 [2024-11-20 21:10:21.914274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.826 [2024-11-20 21:10:21.914286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:03.826 [2024-11-20 21:10:21.914293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:03.826 [2024-11-20 21:10:21.914301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.826 [2024-11-20 21:10:21.914409] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:03.826 [2024-11-20 21:10:21.914428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:03.826 [2024-11-20 21:10:21.914438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:03.826 [2024-11-20 21:10:21.914447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.826 [2024-11-20 21:10:21.914455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:03.826 [2024-11-20 21:10:21.914463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:03.826 [2024-11-20 21:10:21.914471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:03.826 [2024-11-20 21:10:21.914479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:03.826 [2024-11-20 21:10:21.914487] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:03.826 [2024-11-20 21:10:21.914500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:03.826 [2024-11-20 21:10:21.914508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:03.826 [2024-11-20 21:10:21.914520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:03.826 [2024-11-20 21:10:21.914528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:03.826 [2024-11-20 21:10:21.914536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:03.826 [2024-11-20 21:10:21.914545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:03.826 [2024-11-20 21:10:21.914553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.826 [2024-11-20 21:10:21.914561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:03.826 [2024-11-20 21:10:21.914570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:03.826 [2024-11-20 21:10:21.914578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.826 [2024-11-20 21:10:21.914585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:03.826 [2024-11-20 21:10:21.914593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:03.826 [2024-11-20 21:10:21.914602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.826 [2024-11-20 21:10:21.914610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:03.826 [2024-11-20 21:10:21.914618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:03.826 [2024-11-20 21:10:21.914625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.827 [2024-11-20 21:10:21.914640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:03.827 [2024-11-20 21:10:21.914648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:03.827 [2024-11-20 21:10:21.914655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.827 [2024-11-20 21:10:21.914663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:03.827 [2024-11-20 21:10:21.914671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:03.827 [2024-11-20 21:10:21.914678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:03.827 [2024-11-20 21:10:21.914686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:03.827 [2024-11-20 21:10:21.914694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:03.827 [2024-11-20 21:10:21.914701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:03.827 [2024-11-20 21:10:21.914709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:03.827 [2024-11-20 21:10:21.914717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:03.827 [2024-11-20 21:10:21.914726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:03.827 [2024-11-20 21:10:21.914733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:03.827 [2024-11-20 21:10:21.914741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:03.827 [2024-11-20 21:10:21.914765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:28:03.827 [2024-11-20 21:10:21.914772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:03.827 [2024-11-20 21:10:21.914783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:03.827 [2024-11-20 21:10:21.914790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.827 [2024-11-20 21:10:21.914797] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:03.827 [2024-11-20 21:10:21.914806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:03.827 [2024-11-20 21:10:21.914817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:03.827 [2024-11-20 21:10:21.914825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:03.827 [2024-11-20 21:10:21.914833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:03.827 [2024-11-20 21:10:21.914840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:03.827 [2024-11-20 21:10:21.914847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:03.827 [2024-11-20 21:10:21.914855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:03.827 [2024-11-20 21:10:21.914863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:03.827 [2024-11-20 21:10:21.914870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:03.827 [2024-11-20 21:10:21.914879] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:03.827 [2024-11-20 21:10:21.914889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.827 [2024-11-20 21:10:21.914897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:03.827 [2024-11-20 21:10:21.914905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:03.827 [2024-11-20 21:10:21.914915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:03.827 [2024-11-20 21:10:21.914923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:03.827 [2024-11-20 21:10:21.914930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:03.827 [2024-11-20 21:10:21.914937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:03.827 [2024-11-20 21:10:21.914944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:03.827 [2024-11-20 21:10:21.914951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:03.827 [2024-11-20 21:10:21.914958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:03.827 [2024-11-20 21:10:21.914967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:03.827 [2024-11-20 
21:10:21.914974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:03.827 [2024-11-20 21:10:21.914987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:03.827 [2024-11-20 21:10:21.914994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:03.827 [2024-11-20 21:10:21.915002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:03.827 [2024-11-20 21:10:21.915009] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:03.827 [2024-11-20 21:10:21.915022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.827 [2024-11-20 21:10:21.915030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:03.827 [2024-11-20 21:10:21.915038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:03.827 [2024-11-20 21:10:21.915047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:03.827 [2024-11-20 21:10:21.915055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:03.827 [2024-11-20 21:10:21.915065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.827 [2024-11-20 21:10:21.915073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:03.827 [2024-11-20 21:10:21.915081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:28:03.827 [2024-11-20 21:10:21.915089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.827 [2024-11-20 21:10:21.930007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.827 [2024-11-20 21:10:21.930050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:03.827 [2024-11-20 21:10:21.930069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.869 ms 00:28:03.827 [2024-11-20 21:10:21.930079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:03.827 [2024-11-20 21:10:21.930168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:03.827 [2024-11-20 21:10:21.930176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:03.827 [2024-11-20 21:10:21.930184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:03.827 [2024-11-20 21:10:21.930192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-11-20 21:10:21.958853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.087 [2024-11-20 21:10:21.958947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:04.087 [2024-11-20 21:10:21.958979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.600 ms 00:28:04.087 [2024-11-20 21:10:21.959001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-11-20 
21:10:21.959100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.087 [2024-11-20 21:10:21.959125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:04.087 [2024-11-20 21:10:21.959159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:04.087 [2024-11-20 21:10:21.959191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.087 [2024-11-20 21:10:21.959986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:21.960056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:04.088 [2024-11-20 21:10:21.960081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:28:04.088 [2024-11-20 21:10:21.960102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:21.960449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:21.960494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:04.088 [2024-11-20 21:10:21.960518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:28:04.088 [2024-11-20 21:10:21.960538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:21.969148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:21.969200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:04.088 [2024-11-20 21:10:21.969211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.564 ms 00:28:04.088 [2024-11-20 21:10:21.969224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:21.973010] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:04.088 [2024-11-20 21:10:21.973061] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:04.088 [2024-11-20 21:10:21.973073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:21.973082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:04.088 [2024-11-20 21:10:21.973092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.742 ms 00:28:04.088 [2024-11-20 21:10:21.973099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:21.989113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:21.989169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:04.088 [2024-11-20 21:10:21.989181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.963 ms 00:28:04.088 [2024-11-20 21:10:21.989189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:21.992042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:21.992092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:04.088 [2024-11-20 21:10:21.992103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.795 ms 00:28:04.088 [2024-11-20 21:10:21.992110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:21.994810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:28:04.088 [2024-11-20 21:10:21.994855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:04.088 [2024-11-20 21:10:21.994874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.657 ms 00:28:04.088 [2024-11-20 21:10:21.994881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:21.995216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:21.995235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:04.088 [2024-11-20 21:10:21.995245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:28:04.088 [2024-11-20 21:10:21.995253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:22.019131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:22.019194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:04.088 [2024-11-20 21:10:22.019206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.858 ms 00:28:04.088 [2024-11-20 21:10:22.019216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:22.027242] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:04.088 [2024-11-20 21:10:22.030228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:22.030284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:04.088 [2024-11-20 21:10:22.030296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.963 ms 00:28:04.088 [2024-11-20 21:10:22.030304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:22.030386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:22.030397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:04.088 [2024-11-20 21:10:22.030406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:28:04.088 [2024-11-20 21:10:22.030415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:22.031266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:22.031310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:04.088 [2024-11-20 21:10:22.031325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:28:04.088 [2024-11-20 21:10:22.031332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:22.031364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:22.031372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:04.088 [2024-11-20 21:10:22.031380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:04.088 [2024-11-20 21:10:22.031392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:22.031430] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:04.088 [2024-11-20 21:10:22.031445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:22.031456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on 
startup 00:28:04.088 [2024-11-20 21:10:22.031466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:04.088 [2024-11-20 21:10:22.031477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:22.037220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:22.037270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:04.088 [2024-11-20 21:10:22.037291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.723 ms 00:28:04.088 [2024-11-20 21:10:22.037300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:22.037385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:04.088 [2024-11-20 21:10:22.037396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:04.088 [2024-11-20 21:10:22.037405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:28:04.088 [2024-11-20 21:10:22.037413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:04.088 [2024-11-20 21:10:22.038893] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 141.998 ms, result 0 00:28:05.476  [2024-11-20T21:10:24.543Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-20T21:10:25.487Z] Copying: 27/1024 [MB] (15 MBps) [2024-11-20T21:10:26.432Z] Copying: 42/1024 [MB] (14 MBps) [2024-11-20T21:10:27.375Z] Copying: 54/1024 [MB] (12 MBps) [2024-11-20T21:10:28.319Z] Copying: 71/1024 [MB] (16 MBps) [2024-11-20T21:10:29.264Z] Copying: 98/1024 [MB] (26 MBps) [2024-11-20T21:10:30.651Z] Copying: 112/1024 [MB] (14 MBps) [2024-11-20T21:10:31.224Z] Copying: 127/1024 [MB] (14 MBps) [2024-11-20T21:10:32.612Z] Copying: 145/1024 [MB] (18 MBps) [2024-11-20T21:10:33.558Z] Copying: 163/1024 [MB] (17 MBps) [2024-11-20T21:10:34.502Z] Copying: 178/1024 [MB] (14 MBps) [2024-11-20T21:10:35.448Z] Copying: 194/1024 [MB] (16 MBps) [2024-11-20T21:10:36.391Z] Copying: 205/1024 [MB] (10 MBps) [2024-11-20T21:10:37.335Z] Copying: 222/1024 [MB] (17 MBps) [2024-11-20T21:10:38.280Z] Copying: 235/1024 [MB] (12 MBps) [2024-11-20T21:10:39.223Z] Copying: 245/1024 [MB] (10 MBps) [2024-11-20T21:10:40.609Z] Copying: 256/1024 [MB] (10 MBps) [2024-11-20T21:10:41.553Z] Copying: 273/1024 [MB] (17 MBps) [2024-11-20T21:10:42.497Z] Copying: 288/1024 [MB] (15 MBps) [2024-11-20T21:10:43.441Z] Copying: 300/1024 [MB] (11 MBps) [2024-11-20T21:10:44.386Z] Copying: 319/1024 [MB] (18 MBps) [2024-11-20T21:10:45.331Z] Copying: 334/1024 [MB] (15 MBps) [2024-11-20T21:10:46.274Z] Copying: 355/1024 [MB] (20 MBps) [2024-11-20T21:10:47.217Z] Copying: 370/1024 [MB] (14 MBps) [2024-11-20T21:10:48.605Z] Copying: 383/1024 [MB] (13 MBps) [2024-11-20T21:10:49.550Z] Copying: 395/1024 [MB] (12 MBps) [2024-11-20T21:10:50.496Z] Copying: 406/1024 [MB] (10 MBps) [2024-11-20T21:10:51.523Z] Copying: 417/1024 [MB] (11 MBps) [2024-11-20T21:10:52.467Z] Copying: 432/1024 [MB] (14 MBps) [2024-11-20T21:10:53.411Z] Copying: 447/1024 [MB] (15 MBps) [2024-11-20T21:10:54.354Z] Copying: 462/1024 [MB] (14 MBps) [2024-11-20T21:10:55.296Z] Copying: 473/1024 [MB] (11 MBps) [2024-11-20T21:10:56.237Z] Copying: 490/1024 [MB] (16 MBps) [2024-11-20T21:10:57.621Z] Copying: 501/1024 [MB] (10 MBps) [2024-11-20T21:10:58.561Z] Copying: 512/1024 [MB] (10 MBps) [2024-11-20T21:10:59.500Z] Copying: 528/1024 [MB] (16 MBps) [2024-11-20T21:11:00.442Z] Copying: 544/1024 [MB] (15 MBps) 
[2024-11-20T21:11:01.387Z] Copying: 557/1024 [MB] (13 MBps) [2024-11-20T21:11:02.331Z] Copying: 567/1024 [MB] (10 MBps) [2024-11-20T21:11:03.277Z] Copying: 578/1024 [MB] (10 MBps) [2024-11-20T21:11:04.220Z] Copying: 596/1024 [MB] (18 MBps) [2024-11-20T21:11:05.610Z] Copying: 616/1024 [MB] (19 MBps) [2024-11-20T21:11:06.555Z] Copying: 628/1024 [MB] (12 MBps) [2024-11-20T21:11:07.500Z] Copying: 649/1024 [MB] (20 MBps) [2024-11-20T21:11:08.445Z] Copying: 663/1024 [MB] (14 MBps) [2024-11-20T21:11:09.405Z] Copying: 674/1024 [MB] (10 MBps) [2024-11-20T21:11:10.349Z] Copying: 687/1024 [MB] (13 MBps) [2024-11-20T21:11:11.292Z] Copying: 698/1024 [MB] (10 MBps) [2024-11-20T21:11:12.234Z] Copying: 714/1024 [MB] (16 MBps) [2024-11-20T21:11:13.620Z] Copying: 731/1024 [MB] (17 MBps) [2024-11-20T21:11:14.566Z] Copying: 742/1024 [MB] (10 MBps) [2024-11-20T21:11:15.511Z] Copying: 758/1024 [MB] (15 MBps) [2024-11-20T21:11:16.457Z] Copying: 772/1024 [MB] (13 MBps) [2024-11-20T21:11:17.403Z] Copying: 782/1024 [MB] (10 MBps) [2024-11-20T21:11:18.347Z] Copying: 799/1024 [MB] (16 MBps) [2024-11-20T21:11:19.293Z] Copying: 823/1024 [MB] (23 MBps) [2024-11-20T21:11:20.240Z] Copying: 839/1024 [MB] (16 MBps) [2024-11-20T21:11:21.629Z] Copying: 854/1024 [MB] (14 MBps) [2024-11-20T21:11:22.575Z] Copying: 869/1024 [MB] (15 MBps) [2024-11-20T21:11:23.597Z] Copying: 883/1024 [MB] (13 MBps) [2024-11-20T21:11:24.540Z] Copying: 901/1024 [MB] (17 MBps) [2024-11-20T21:11:25.482Z] Copying: 911/1024 [MB] (10 MBps) [2024-11-20T21:11:26.426Z] Copying: 925/1024 [MB] (13 MBps) [2024-11-20T21:11:27.369Z] Copying: 942/1024 [MB] (17 MBps) [2024-11-20T21:11:28.311Z] Copying: 958/1024 [MB] (16 MBps) [2024-11-20T21:11:29.253Z] Copying: 976/1024 [MB] (17 MBps) [2024-11-20T21:11:30.639Z] Copying: 991/1024 [MB] (15 MBps) [2024-11-20T21:11:31.213Z] Copying: 1007/1024 [MB] (15 MBps) [2024-11-20T21:11:31.213Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-20 21:11:31.089800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.094 [2024-11-20 21:11:31.090103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:13.094 [2024-11-20 21:11:31.090122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:13.094 [2024-11-20 21:11:31.090134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.094 [2024-11-20 21:11:31.090170] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:13.094 [2024-11-20 21:11:31.090659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.094 [2024-11-20 21:11:31.090681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:13.094 [2024-11-20 21:11:31.090693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:29:13.094 [2024-11-20 21:11:31.090712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.094 [2024-11-20 21:11:31.091229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.094 [2024-11-20 21:11:31.091352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:13.094 [2024-11-20 21:11:31.091431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:29:13.094 [2024-11-20 21:11:31.091464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.094 [2024-11-20 21:11:31.096524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.094 [2024-11-20 
21:11:31.096627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:13.094 [2024-11-20 21:11:31.096683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.964 ms 00:29:13.094 [2024-11-20 21:11:31.096706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.094 [2024-11-20 21:11:31.102890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.094 [2024-11-20 21:11:31.103058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:13.094 [2024-11-20 21:11:31.103108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.154 ms 00:29:13.094 [2024-11-20 21:11:31.103120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.094 [2024-11-20 21:11:31.104715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.094 [2024-11-20 21:11:31.104756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:13.094 [2024-11-20 21:11:31.104766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:29:13.094 [2024-11-20 21:11:31.104772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.094 [2024-11-20 21:11:31.108606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.094 [2024-11-20 21:11:31.108639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:13.094 [2024-11-20 21:11:31.108646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.804 ms 00:29:13.094 [2024-11-20 21:11:31.108652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.094 [2024-11-20 21:11:31.112386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.094 [2024-11-20 21:11:31.112411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:13.094 [2024-11-20 21:11:31.112419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.709 ms 00:29:13.094 [2024-11-20 21:11:31.112424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.094 [2024-11-20 21:11:31.114780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.094 [2024-11-20 21:11:31.114804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:13.094 [2024-11-20 21:11:31.114811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.340 ms 00:29:13.094 [2024-11-20 21:11:31.114817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.094 [2024-11-20 21:11:31.116924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.094 [2024-11-20 21:11:31.116947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:13.094 [2024-11-20 21:11:31.116953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.084 ms 00:29:13.094 [2024-11-20 21:11:31.116958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.094 [2024-11-20 21:11:31.119399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.094 [2024-11-20 21:11:31.119508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:13.094 [2024-11-20 21:11:31.119541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.409 ms 00:29:13.094 [2024-11-20 21:11:31.119563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.094 [2024-11-20 21:11:31.122424] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.094 [2024-11-20 21:11:31.122501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:13.094 [2024-11-20 21:11:31.122526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.704 ms 00:29:13.094 [2024-11-20 21:11:31.122547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.094 [2024-11-20 21:11:31.122618] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:13.094 [2024-11-20 21:11:31.122657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:13.094 [2024-11-20 21:11:31.122687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:13.094 [2024-11-20 21:11:31.122711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:13.094 [2024-11-20 21:11:31.122735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:13.094 [2024-11-20 21:11:31.122791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:13.094 [2024-11-20 21:11:31.122815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:13.094 [2024-11-20 21:11:31.122838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:13.094 [2024-11-20 21:11:31.122862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.122884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.122909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.122931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.122956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.122979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123188] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 
[2024-11-20 21:11:31.123831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.123995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:29:13.095 [2024-11-20 21:11:31.124418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.124998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.125022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:29:13.095 [2024-11-20 21:11:31.125045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:13.096 [2024-11-20 21:11:31.125068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:13.096 [2024-11-20 21:11:31.125091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:13.096 [2024-11-20 21:11:31.125114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:13.096 [2024-11-20 21:11:31.125161] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:13.096 [2024-11-20 21:11:31.125185] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4c9ab7ca-d499-4cca-a63d-6591be32da33 00:29:13.096 [2024-11-20 21:11:31.125209] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:13.096 [2024-11-20 21:11:31.125232] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:13.096 [2024-11-20 21:11:31.125253] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:13.096 [2024-11-20 21:11:31.125275] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:13.096 [2024-11-20 21:11:31.125298] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:13.096 [2024-11-20 21:11:31.125320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:13.096 [2024-11-20 21:11:31.125342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:13.096 [2024-11-20 21:11:31.125361] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:13.096 [2024-11-20 21:11:31.125380] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:13.096 [2024-11-20 21:11:31.125402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.096 [2024-11-20 21:11:31.125424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:13.096 [2024-11-20 21:11:31.125468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.785 ms 00:29:13.096 [2024-11-20 21:11:31.125490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.128427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.096 [2024-11-20 21:11:31.128673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:13.096 [2024-11-20 21:11:31.128851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.894 ms 00:29:13.096 [2024-11-20 21:11:31.128922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.129185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:13.096 [2024-11-20 21:11:31.129784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:13.096 [2024-11-20 21:11:31.130072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:29:13.096 [2024-11-20 21:11:31.130115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.140151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.096 [2024-11-20 21:11:31.140417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:13.096 [2024-11-20 21:11:31.140730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:29:13.096 [2024-11-20 21:11:31.140787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.140937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.096 [2024-11-20 21:11:31.140962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:13.096 [2024-11-20 21:11:31.140985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.096 [2024-11-20 21:11:31.141007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.141154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.096 [2024-11-20 21:11:31.141187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:13.096 [2024-11-20 21:11:31.141212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.096 [2024-11-20 21:11:31.141253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.141304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.096 [2024-11-20 21:11:31.141331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:13.096 [2024-11-20 21:11:31.141356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.096 [2024-11-20 21:11:31.141382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.150463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.096 [2024-11-20 21:11:31.150596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:13.096 [2024-11-20 21:11:31.150609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.096 [2024-11-20 21:11:31.150617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.157531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.096 [2024-11-20 21:11:31.157634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:13.096 [2024-11-20 21:11:31.157679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.096 [2024-11-20 21:11:31.157702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.157737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.096 [2024-11-20 21:11:31.157769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:13.096 [2024-11-20 21:11:31.157789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.096 [2024-11-20 21:11:31.157808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.157859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.096 [2024-11-20 21:11:31.157881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:13.096 [2024-11-20 21:11:31.157906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.096 [2024-11-20 21:11:31.157969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.158063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.096 [2024-11-20 21:11:31.158089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:13.096 
[2024-11-20 21:11:31.158109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.096 [2024-11-20 21:11:31.158128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.158166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.096 [2024-11-20 21:11:31.158188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:13.096 [2024-11-20 21:11:31.158279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.096 [2024-11-20 21:11:31.158298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.158343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.096 [2024-11-20 21:11:31.158365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:13.096 [2024-11-20 21:11:31.158384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.096 [2024-11-20 21:11:31.158437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.158774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:13.096 [2024-11-20 21:11:31.158904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:13.096 [2024-11-20 21:11:31.158967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:13.096 [2024-11-20 21:11:31.159021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:13.096 [2024-11-20 21:11:31.159174] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.343 ms, result 0 00:29:13.358 00:29:13.358 00:29:13.358 21:11:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:15.903 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 91013 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91013 ']' 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 91013 00:29:15.903 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91013) - No such process 00:29:15.903 Process with pid 91013 is not found 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 91013 is not found' 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:15.903 Remove shared memory files 
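The "testfile2: OK" above is the payoff of a checksum round trip: the test records an md5 of the data before the unclean shutdown and re-verifies it against what the recovered FTL device reads back. A minimal sketch of that pattern, with illustrative paths:

    # Record a checksum while the data is known-good, before the dirty shutdown.
    md5sum /path/to/testfile2 > /path/to/testfile2.md5
    # ... unclean shutdown, FTL restart, recovery, data read back ...
    md5sum -c /path/to/testfile2.md5    # prints "testfile2: OK" only if contents match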
00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:15.903 ************************************ 00:29:15.903 END TEST ftl_dirty_shutdown 00:29:15.903 ************************************ 00:29:15.903 00:29:15.903 real 4m22.465s 00:29:15.903 user 4m48.013s 00:29:15.903 sys 0m26.915s 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.903 21:11:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:16.164 21:11:34 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:16.164 21:11:34 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:16.164 21:11:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:16.164 21:11:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:16.164 ************************************ 00:29:16.164 START TEST ftl_upgrade_shutdown 00:29:16.164 ************************************ 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:16.164 * Looking for test storage... 
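The START/END banners and the real/user/sys timing above come from the run_test wrapper in autotest_common.sh. Roughly, as a simplified sketch rather than the actual implementation:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                 # here: upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
        echo "END TEST $name"     # produces the banner and timing block seen above
    }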
00:29:16.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:16.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.164 --rc genhtml_branch_coverage=1 00:29:16.164 --rc genhtml_function_coverage=1 00:29:16.164 --rc genhtml_legend=1 00:29:16.164 --rc geninfo_all_blocks=1 00:29:16.164 --rc geninfo_unexecuted_blocks=1 00:29:16.164 00:29:16.164 ' 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:16.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.164 --rc genhtml_branch_coverage=1 00:29:16.164 --rc genhtml_function_coverage=1 00:29:16.164 --rc genhtml_legend=1 00:29:16.164 --rc geninfo_all_blocks=1 00:29:16.164 --rc geninfo_unexecuted_blocks=1 00:29:16.164 00:29:16.164 ' 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:16.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.164 --rc genhtml_branch_coverage=1 00:29:16.164 --rc genhtml_function_coverage=1 00:29:16.164 --rc genhtml_legend=1 00:29:16.164 --rc geninfo_all_blocks=1 00:29:16.164 --rc geninfo_unexecuted_blocks=1 00:29:16.164 00:29:16.164 ' 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:16.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.164 --rc genhtml_branch_coverage=1 00:29:16.164 --rc genhtml_function_coverage=1 00:29:16.164 --rc genhtml_legend=1 00:29:16.164 --rc geninfo_all_blocks=1 00:29:16.164 --rc geninfo_unexecuted_blocks=1 00:29:16.164 00:29:16.164 ' 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:16.164 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:16.165 21:11:34 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93837 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93837 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93837 ']' 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:16.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.165 21:11:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:16.425 [2024-11-20 21:11:34.323588] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
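tcp_target_setup launches the SPDK target pinned to core 0 and then blocks in waitforlisten until the RPC socket answers. A minimal stand-in for that wait, assuming the default /var/tmp/spdk.sock socket:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
    spdk_tgt_pid=$!
    # Poll until the target's RPC server is up; rpc_get_methods is a cheap
    # call that succeeds as soon as the socket is listening.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done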
00:29:16.425 [2024-11-20 21:11:34.323932] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93837 ] 00:29:16.425 [2024-11-20 21:11:34.471588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.425 [2024-11-20 21:11:34.502699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:17.370 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:17.632 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:17.633 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:17.633 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:17.633 21:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:17.633 21:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:17.633 21:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:17.633 21:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:17.633 21:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:17.633 21:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:17.633 { 00:29:17.633 "name": "basen1", 00:29:17.633 "aliases": [ 00:29:17.633 "69d1821c-171e-44ba-85e3-1ba1b8d7a62c" 00:29:17.633 ], 00:29:17.633 "product_name": "NVMe disk", 00:29:17.633 "block_size": 4096, 00:29:17.633 "num_blocks": 1310720, 00:29:17.633 "uuid": "69d1821c-171e-44ba-85e3-1ba1b8d7a62c", 00:29:17.633 "numa_id": -1, 00:29:17.633 "assigned_rate_limits": { 00:29:17.633 "rw_ios_per_sec": 0, 00:29:17.633 "rw_mbytes_per_sec": 0, 00:29:17.633 "r_mbytes_per_sec": 0, 00:29:17.633 "w_mbytes_per_sec": 0 00:29:17.633 }, 00:29:17.633 "claimed": true, 00:29:17.633 "claim_type": "read_many_write_one", 00:29:17.633 "zoned": false, 00:29:17.633 "supported_io_types": { 00:29:17.633 "read": true, 00:29:17.633 "write": true, 00:29:17.633 "unmap": true, 00:29:17.633 "flush": true, 00:29:17.633 "reset": true, 00:29:17.633 "nvme_admin": true, 00:29:17.633 "nvme_io": true, 00:29:17.633 "nvme_io_md": false, 00:29:17.633 "write_zeroes": true, 00:29:17.633 "zcopy": false, 00:29:17.633 "get_zone_info": false, 00:29:17.633 "zone_management": false, 00:29:17.633 "zone_append": false, 00:29:17.633 "compare": true, 00:29:17.633 "compare_and_write": false, 00:29:17.633 "abort": true, 00:29:17.633 "seek_hole": false, 00:29:17.633 "seek_data": false, 00:29:17.633 "copy": true, 00:29:17.633 "nvme_iov_md": false 00:29:17.633 }, 00:29:17.633 "driver_specific": { 00:29:17.633 "nvme": [ 00:29:17.633 { 00:29:17.633 "pci_address": "0000:00:11.0", 00:29:17.633 "trid": { 00:29:17.633 "trtype": "PCIe", 00:29:17.633 "traddr": "0000:00:11.0" 00:29:17.633 }, 00:29:17.633 "ctrlr_data": { 00:29:17.633 "cntlid": 0, 00:29:17.633 "vendor_id": "0x1b36", 00:29:17.633 "model_number": "QEMU NVMe Ctrl", 00:29:17.633 "serial_number": "12341", 00:29:17.633 "firmware_revision": "8.0.0", 00:29:17.633 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:17.633 "oacs": { 00:29:17.633 "security": 0, 00:29:17.633 "format": 1, 00:29:17.633 "firmware": 0, 00:29:17.633 "ns_manage": 1 00:29:17.633 }, 00:29:17.633 "multi_ctrlr": false, 00:29:17.633 "ana_reporting": false 00:29:17.633 }, 00:29:17.633 "vs": { 00:29:17.633 "nvme_version": "1.4" 00:29:17.633 }, 00:29:17.633 "ns_data": { 00:29:17.633 "id": 1, 00:29:17.633 "can_share": false 00:29:17.633 } 00:29:17.633 } 00:29:17.633 ], 00:29:17.633 "mp_policy": "active_passive" 00:29:17.633 } 00:29:17.633 } 00:29:17.633 ]' 00:29:17.633 21:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:17.894 21:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:17.895 21:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:17.895 21:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:17.895 21:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:17.895 21:11:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:17.895 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:17.895 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:17.895 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:17.895 21:11:35 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:17.895 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:17.895 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=4b1c9cbc-7737-4aec-995f-bb9d3a3bf7de 00:29:17.895 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:17.895 21:11:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4b1c9cbc-7737-4aec-995f-bb9d3a3bf7de 00:29:18.155 21:11:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:18.416 21:11:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=bb7730c5-f74c-43df-abfd-b115eb382078 00:29:18.416 21:11:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u bb7730c5-f74c-43df-abfd-b115eb382078 00:29:18.678 21:11:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=ab6b5613-8102-4f54-a143-b43bde70e097 00:29:18.678 21:11:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z ab6b5613-8102-4f54-a143-b43bde70e097 ]] 00:29:18.678 21:11:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 ab6b5613-8102-4f54-a143-b43bde70e097 5120 00:29:18.678 21:11:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:18.678 21:11:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:18.678 21:11:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=ab6b5613-8102-4f54-a143-b43bde70e097 00:29:18.678 21:11:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:18.678 21:11:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size ab6b5613-8102-4f54-a143-b43bde70e097 00:29:18.678 21:11:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ab6b5613-8102-4f54-a143-b43bde70e097 00:29:18.678 21:11:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:18.678 21:11:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:18.678 21:11:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:18.678 21:11:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ab6b5613-8102-4f54-a143-b43bde70e097 00:29:18.940 21:11:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:18.940 { 00:29:18.940 "name": "ab6b5613-8102-4f54-a143-b43bde70e097", 00:29:18.940 "aliases": [ 00:29:18.940 "lvs/basen1p0" 00:29:18.940 ], 00:29:18.940 "product_name": "Logical Volume", 00:29:18.940 "block_size": 4096, 00:29:18.940 "num_blocks": 5242880, 00:29:18.940 "uuid": "ab6b5613-8102-4f54-a143-b43bde70e097", 00:29:18.940 "assigned_rate_limits": { 00:29:18.940 "rw_ios_per_sec": 0, 00:29:18.940 "rw_mbytes_per_sec": 0, 00:29:18.940 "r_mbytes_per_sec": 0, 00:29:18.940 "w_mbytes_per_sec": 0 00:29:18.940 }, 00:29:18.940 "claimed": false, 00:29:18.940 "zoned": false, 00:29:18.940 "supported_io_types": { 00:29:18.940 "read": true, 00:29:18.940 "write": true, 00:29:18.940 "unmap": true, 00:29:18.940 "flush": false, 00:29:18.940 "reset": true, 00:29:18.940 "nvme_admin": false, 00:29:18.940 "nvme_io": false, 00:29:18.940 "nvme_io_md": false, 00:29:18.940 "write_zeroes": 
true, 00:29:18.940 "zcopy": false, 00:29:18.940 "get_zone_info": false, 00:29:18.940 "zone_management": false, 00:29:18.940 "zone_append": false, 00:29:18.940 "compare": false, 00:29:18.940 "compare_and_write": false, 00:29:18.940 "abort": false, 00:29:18.940 "seek_hole": true, 00:29:18.940 "seek_data": true, 00:29:18.940 "copy": false, 00:29:18.940 "nvme_iov_md": false 00:29:18.940 }, 00:29:18.940 "driver_specific": { 00:29:18.940 "lvol": { 00:29:18.940 "lvol_store_uuid": "bb7730c5-f74c-43df-abfd-b115eb382078", 00:29:18.940 "base_bdev": "basen1", 00:29:18.940 "thin_provision": true, 00:29:18.940 "num_allocated_clusters": 0, 00:29:18.940 "snapshot": false, 00:29:18.940 "clone": false, 00:29:18.940 "esnap_clone": false 00:29:18.940 } 00:29:18.940 } 00:29:18.940 } 00:29:18.940 ]' 00:29:18.940 21:11:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:18.940 21:11:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:18.940 21:11:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:18.940 21:11:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:18.940 21:11:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:18.940 21:11:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:18.940 21:11:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:18.940 21:11:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:18.940 21:11:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:19.201 21:11:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:19.201 21:11:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:19.201 21:11:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:19.462 21:11:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:19.462 21:11:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:19.462 21:11:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d ab6b5613-8102-4f54-a143-b43bde70e097 -c cachen1p0 --l2p_dram_limit 2 00:29:19.725 [2024-11-20 21:11:37.618843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.725 [2024-11-20 21:11:37.618883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:19.725 [2024-11-20 21:11:37.618893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:19.725 [2024-11-20 21:11:37.618900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.725 [2024-11-20 21:11:37.618939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.725 [2024-11-20 21:11:37.618948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:19.725 [2024-11-20 21:11:37.618956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:19.725 [2024-11-20 21:11:37.618964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.725 [2024-11-20 21:11:37.618982] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:19.725 [2024-11-20 
21:11:37.619158] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:19.725 [2024-11-20 21:11:37.619174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.725 [2024-11-20 21:11:37.619181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:19.725 [2024-11-20 21:11:37.619188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:29:19.725 [2024-11-20 21:11:37.619198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.725 [2024-11-20 21:11:37.619220] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 5c537862-7c69-4a14-9c45-e8599522a66b 00:29:19.725 [2024-11-20 21:11:37.620203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.725 [2024-11-20 21:11:37.620218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:19.725 [2024-11-20 21:11:37.620231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:29:19.725 [2024-11-20 21:11:37.620238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.725 [2024-11-20 21:11:37.624981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.725 [2024-11-20 21:11:37.625008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:19.725 [2024-11-20 21:11:37.625017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.706 ms 00:29:19.725 [2024-11-20 21:11:37.625023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.725 [2024-11-20 21:11:37.625087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.725 [2024-11-20 21:11:37.625096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:19.725 [2024-11-20 21:11:37.625105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:19.725 [2024-11-20 21:11:37.625111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.725 [2024-11-20 21:11:37.625148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.725 [2024-11-20 21:11:37.625158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:19.725 [2024-11-20 21:11:37.625166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:19.725 [2024-11-20 21:11:37.625172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.725 [2024-11-20 21:11:37.625188] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:19.725 [2024-11-20 21:11:37.626448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.726 [2024-11-20 21:11:37.626475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:19.726 [2024-11-20 21:11:37.626482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.264 ms 00:29:19.726 [2024-11-20 21:11:37.626489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.726 [2024-11-20 21:11:37.626508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.726 [2024-11-20 21:11:37.626516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:19.726 [2024-11-20 21:11:37.626522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:19.726 [2024-11-20 21:11:37.626530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:19.726 [2024-11-20 21:11:37.626549] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:19.726 [2024-11-20 21:11:37.626657] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:19.726 [2024-11-20 21:11:37.626668] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:19.726 [2024-11-20 21:11:37.626680] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:19.726 [2024-11-20 21:11:37.626688] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:19.726 [2024-11-20 21:11:37.626696] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:19.726 [2024-11-20 21:11:37.626706] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:19.726 [2024-11-20 21:11:37.626713] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:19.726 [2024-11-20 21:11:37.626719] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:19.726 [2024-11-20 21:11:37.626726] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:19.726 [2024-11-20 21:11:37.626732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.726 [2024-11-20 21:11:37.626739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:19.726 [2024-11-20 21:11:37.626757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.184 ms 00:29:19.726 [2024-11-20 21:11:37.626764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.726 [2024-11-20 21:11:37.626828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.726 [2024-11-20 21:11:37.626840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:19.726 [2024-11-20 21:11:37.626846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:29:19.726 [2024-11-20 21:11:37.626852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.726 [2024-11-20 21:11:37.626938] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:19.726 [2024-11-20 21:11:37.626947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:19.726 [2024-11-20 21:11:37.626954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:19.726 [2024-11-20 21:11:37.626985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.726 [2024-11-20 21:11:37.626992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:19.726 [2024-11-20 21:11:37.626998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:19.726 [2024-11-20 21:11:37.627004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:19.726 [2024-11-20 21:11:37.627010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:19.726 [2024-11-20 21:11:37.627016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:19.726 [2024-11-20 21:11:37.627023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.726 [2024-11-20 21:11:37.627028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:19.726 [2024-11-20 21:11:37.627035] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:19.726 [2024-11-20 21:11:37.627040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.726 [2024-11-20 21:11:37.627049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:19.726 [2024-11-20 21:11:37.627055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:19.726 [2024-11-20 21:11:37.627062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.726 [2024-11-20 21:11:37.627067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:19.726 [2024-11-20 21:11:37.627074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:19.726 [2024-11-20 21:11:37.627079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.726 [2024-11-20 21:11:37.627085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:19.726 [2024-11-20 21:11:37.627090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:19.726 [2024-11-20 21:11:37.627099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:19.726 [2024-11-20 21:11:37.627105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:19.726 [2024-11-20 21:11:37.627112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:19.726 [2024-11-20 21:11:37.627118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:19.726 [2024-11-20 21:11:37.627125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:19.726 [2024-11-20 21:11:37.627130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:19.726 [2024-11-20 21:11:37.627137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:19.726 [2024-11-20 21:11:37.627143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:19.726 [2024-11-20 21:11:37.627151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:19.726 [2024-11-20 21:11:37.627157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:19.726 [2024-11-20 21:11:37.627164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:19.726 [2024-11-20 21:11:37.627169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:19.726 [2024-11-20 21:11:37.627177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.726 [2024-11-20 21:11:37.627185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:19.726 [2024-11-20 21:11:37.627193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:19.726 [2024-11-20 21:11:37.627198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.726 [2024-11-20 21:11:37.627206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:19.726 [2024-11-20 21:11:37.627211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:19.726 [2024-11-20 21:11:37.627218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.726 [2024-11-20 21:11:37.627224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:19.726 [2024-11-20 21:11:37.627231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:19.726 [2024-11-20 21:11:37.627236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.726 [2024-11-20 21:11:37.627243] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:19.726 [2024-11-20 21:11:37.627250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:19.726 [2024-11-20 21:11:37.627259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:19.726 [2024-11-20 21:11:37.627266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:19.726 [2024-11-20 21:11:37.627273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:19.726 [2024-11-20 21:11:37.627279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:19.726 [2024-11-20 21:11:37.627290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:19.726 [2024-11-20 21:11:37.627303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:19.726 [2024-11-20 21:11:37.627310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:19.726 [2024-11-20 21:11:37.627316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:19.726 [2024-11-20 21:11:37.627325] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:19.726 [2024-11-20 21:11:37.627335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:19.726 [2024-11-20 21:11:37.627343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:19.726 [2024-11-20 21:11:37.627349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:19.726 [2024-11-20 21:11:37.627358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:19.726 [2024-11-20 21:11:37.627364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:19.726 [2024-11-20 21:11:37.627371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:19.726 [2024-11-20 21:11:37.627376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:19.726 [2024-11-20 21:11:37.627384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:19.726 [2024-11-20 21:11:37.627389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:19.726 [2024-11-20 21:11:37.627395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:19.726 [2024-11-20 21:11:37.627400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:19.726 [2024-11-20 21:11:37.627407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:19.726 [2024-11-20 21:11:37.627412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:19.726 [2024-11-20 21:11:37.627418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:19.726 [2024-11-20 21:11:37.627423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:19.726 [2024-11-20 21:11:37.627430] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:19.726 [2024-11-20 21:11:37.627435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:19.726 [2024-11-20 21:11:37.627442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:19.726 [2024-11-20 21:11:37.627447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:19.727 [2024-11-20 21:11:37.627454] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:19.727 [2024-11-20 21:11:37.627459] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:19.727 [2024-11-20 21:11:37.627465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:19.727 [2024-11-20 21:11:37.627471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:19.727 [2024-11-20 21:11:37.627479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.589 ms 00:29:19.727 [2024-11-20 21:11:37.627485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:19.727 [2024-11-20 21:11:37.627514] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
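The layout numbers in the dump above are self-consistent; a worked check (the 20% overprovisioning factor is an inference from the numbers, not something the log states):

    echo $(( 18432 * 1048576 / 4096 ))  # 4718592 blocks in the 18432 MiB data region
    echo $(( 4718592 * 8 / 10 ))        # 3774873 -> matches "L2P entries" (~80% usable)
    echo $(( 3774873 * 4 ))             # 15099492 B, i.e. ~14.40 MiB of L2P table
    echo $(( 0xe80 * 4096 ))            # 15204352 B = 14.50 MiB l2p region (table rounded up to whole blocks)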
00:29:19.727 [2024-11-20 21:11:37.627521] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:23.032 [2024-11-20 21:11:40.861163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:40.861279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:23.032 [2024-11-20 21:11:40.861302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3233.624 ms 00:29:23.032 [2024-11-20 21:11:40.861312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:40.879624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:40.879693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:23.032 [2024-11-20 21:11:40.879710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.171 ms 00:29:23.032 [2024-11-20 21:11:40.879720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:40.879832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:40.879850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:23.032 [2024-11-20 21:11:40.879863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:23.032 [2024-11-20 21:11:40.879872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:40.896829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:40.896885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:23.032 [2024-11-20 21:11:40.896900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.913 ms 00:29:23.032 [2024-11-20 21:11:40.896909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:40.896951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:40.896962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:23.032 [2024-11-20 21:11:40.896980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:23.032 [2024-11-20 21:11:40.896989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:40.897681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:40.897715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:23.032 [2024-11-20 21:11:40.897734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.635 ms 00:29:23.032 [2024-11-20 21:11:40.897792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:40.897850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:40.897865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:23.032 [2024-11-20 21:11:40.897878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:23.032 [2024-11-20 21:11:40.897888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:40.909329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:40.909375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:23.032 [2024-11-20 21:11:40.909390] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.414 ms 00:29:23.032 [2024-11-20 21:11:40.909398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:40.920847] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:23.032 [2024-11-20 21:11:40.922475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:40.922525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:23.032 [2024-11-20 21:11:40.922538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.980 ms 00:29:23.032 [2024-11-20 21:11:40.922550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:40.955004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:40.955069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:23.032 [2024-11-20 21:11:40.955087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.421 ms 00:29:23.032 [2024-11-20 21:11:40.955103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:40.955225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:40.955244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:23.032 [2024-11-20 21:11:40.955255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:29:23.032 [2024-11-20 21:11:40.955266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:40.959424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:40.959480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:23.032 [2024-11-20 21:11:40.959492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.120 ms 00:29:23.032 [2024-11-20 21:11:40.959507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:40.963562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:40.963616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:23.032 [2024-11-20 21:11:40.963627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.008 ms 00:29:23.032 [2024-11-20 21:11:40.963637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:40.964027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:40.964053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:23.032 [2024-11-20 21:11:40.964064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.345 ms 00:29:23.032 [2024-11-20 21:11:40.964078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:41.008527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:41.008583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:23.032 [2024-11-20 21:11:41.008597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.411 ms 00:29:23.032 [2024-11-20 21:11:41.008612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:41.016092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:23.032 [2024-11-20 21:11:41.016145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:23.032 [2024-11-20 21:11:41.016158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.420 ms 00:29:23.032 [2024-11-20 21:11:41.016169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:41.020973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:41.021024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:23.032 [2024-11-20 21:11:41.021035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.757 ms 00:29:23.032 [2024-11-20 21:11:41.021045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:41.030001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:41.030399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:23.032 [2024-11-20 21:11:41.030451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.904 ms 00:29:23.032 [2024-11-20 21:11:41.030485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:41.030602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.032 [2024-11-20 21:11:41.030634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:23.032 [2024-11-20 21:11:41.030658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:23.032 [2024-11-20 21:11:41.030683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.032 [2024-11-20 21:11:41.030960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.033 [2024-11-20 21:11:41.031010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:23.033 [2024-11-20 21:11:41.031037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.093 ms 00:29:23.033 [2024-11-20 21:11:41.031075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.033 [2024-11-20 21:11:41.033426] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3413.999 ms, result 0 00:29:23.033 { 00:29:23.033 "name": "ftl", 00:29:23.033 "uuid": "5c537862-7c69-4a14-9c45-e8599522a66b" 00:29:23.033 } 00:29:23.033 21:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:23.294 [2024-11-20 21:11:41.256975] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:23.294 21:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:23.556 21:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:23.818 [2024-11-20 21:11:41.677398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:23.818 21:11:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:23.818 [2024-11-20 21:11:41.897920] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:23.818 21:11:41 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:24.391 Fill FTL, iteration 1 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=93955 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 93955 /var/tmp/spdk.tgt.sock 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 93955 ']' 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:24.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.391 21:11:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:24.391 [2024-11-20 21:11:42.345504] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
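
Before the initiator setup continues below, the export sequence that just completed (ftl/common.sh@121-126 above) condenses to the following sketch. The RPC commands are the ones visible in the trace; $SPDK_DIR stands in for /home/vagrant/spdk_repo/spdk, and the save_config redirection target is an assumption based on the tgt.json used at the restart later in the run.

    # Export the FTL bdev "ftl" over NVMe/TCP and snapshot the target config.
    rpc="$SPDK_DIR/scripts/rpc.py"

    $rpc nvmf_create_transport --trtype TCP                       # bring up the TCP transport
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 # allow any host, 1 namespace max
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl     # attach the FTL bdev as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
        -t TCP -f ipv4 -s 4420 -a 127.0.0.1                       # listen on 127.0.0.1:4420
    $rpc save_config > "$SPDK_DIR/test/ftl/config/tgt.json"       # snapshot for the later restart
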
00:29:24.391 [2024-11-20 21:11:42.346011] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93955 ] 00:29:24.391 [2024-11-20 21:11:42.490575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.653 [2024-11-20 21:11:42.530627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.226 21:11:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.226 21:11:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:25.226 21:11:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:25.487 ftln1 00:29:25.487 21:11:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:25.487 21:11:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:25.749 21:11:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:25.749 21:11:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 93955 00:29:25.749 21:11:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93955 ']' 00:29:25.749 21:11:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93955 00:29:25.749 21:11:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:25.749 21:11:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.749 21:11:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93955 00:29:25.749 killing process with pid 93955 00:29:25.749 21:11:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:25.749 21:11:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:25.749 21:11:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93955' 00:29:25.749 21:11:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93955 00:29:25.749 21:11:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93955 00:29:26.011 21:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:26.011 21:11:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:26.271 [2024-11-20 21:11:44.172337] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
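
tcp_initiator_setup, whose trace ends above, starts a second spdk_tgt on its own RPC socket, attaches to the exported subsystem as an NVMe-oF initiator (the namespace appears as bdev ftln1), and wraps just the bdev subsystem config into ini.json so spdk_dd can be driven from a JSON file instead of a live RPC server. A minimal sketch; the socket, NQN, and paths are the ones in this run, while the polling loop is a hedged stand-in for the harness's waitforlisten helper (rpc_get_methods is a core SPDK RPC):

    sock=/var/tmp/spdk.tgt.sock
    "$SPDK_DIR/build/bin/spdk_tgt" '--cpumask=[1]' --rpc-socket=$sock &
    ini_pid=$!
    # Poll until the RPC socket answers, approximating waitforlisten
    until "$SPDK_DIR/scripts/rpc.py" -s $sock rpc_get_methods &>/dev/null; do sleep 0.1; done
    # Attach to the NVMe/TCP target; the remote namespace shows up as bdev "ftln1"
    "$SPDK_DIR/scripts/rpc.py" -s $sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    # Wrap the bdev subsystem config in a complete JSON document for spdk_dd --json=...
    {
        echo '{"subsystems": ['
        "$SPDK_DIR/scripts/rpc.py" -s $sock save_subsystem_config -n bdev
        echo ']}'
    } > "$SPDK_DIR/test/ftl/config/ini.json"
    kill $ini_pid   # the helper spdk_tgt only exists to generate this config
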
00:29:26.272 [2024-11-20 21:11:44.172443] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93991 ] 00:29:26.272 [2024-11-20 21:11:44.317372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.272 [2024-11-20 21:11:44.340951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.653  [2024-11-20T21:11:46.708Z] Copying: 200/1024 [MB] (200 MBps) [2024-11-20T21:11:47.649Z] Copying: 436/1024 [MB] (236 MBps) [2024-11-20T21:11:48.601Z] Copying: 683/1024 [MB] (247 MBps) [2024-11-20T21:11:49.167Z] Copying: 912/1024 [MB] (229 MBps) [2024-11-20T21:11:49.425Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:29:31.306 00:29:31.306 21:11:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:31.306 21:11:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:31.306 Calculate MD5 checksum, iteration 1 00:29:31.306 21:11:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:31.306 21:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:31.306 21:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:31.306 21:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:31.306 21:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:31.306 21:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:31.306 [2024-11-20 21:11:49.240372] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
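
Each fill pass above is a single spdk_dd run driven by the saved ini.json: 1024 one-MiB blocks of /dev/urandom written to ftln1 at queue depth 2, with --seek advancing by the block count per iteration so each pass lands in a disjoint 1 GiB window. The invocation as it appears in the trace, with $SPDK_DIR hedged for the repo path:

    i=0   # iteration index; the second fill pass runs with i=1 (seek=1024)
    "$SPDK_DIR/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$SPDK_DIR/test/ftl/config/ini.json" \
        --if=/dev/urandom --ob=ftln1 \
        --bs=1048576 --count=1024 --qd=2 --seek=$((i * 1024))
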
00:29:31.306 [2024-11-20 21:11:49.240649] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94044 ] 00:29:31.306 [2024-11-20 21:11:49.383167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.306 [2024-11-20 21:11:49.403355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.679  [2024-11-20T21:11:51.366Z] Copying: 676/1024 [MB] (676 MBps) [2024-11-20T21:11:51.366Z] Copying: 1024/1024 [MB] (average 662 MBps) 00:29:33.247 00:29:33.247 21:11:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:33.247 21:11:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:35.794 21:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:35.794 Fill FTL, iteration 2 00:29:35.794 21:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=83c206189037ad737c4ac2036711e543 00:29:35.794 21:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:35.794 21:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:35.794 21:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:35.794 21:11:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:35.794 21:11:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:35.794 21:11:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:35.794 21:11:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:35.794 21:11:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:35.794 21:11:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:35.794 [2024-11-20 21:11:53.452706] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
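
The verify half mirrors the fill: the same 1 GiB window is read back out of ftln1 into a flat file (--ib/--of with --skip instead of --seek) and its MD5 digest is recorded per iteration in sums[], presumably for comparison after the shutdown/reload later in the test. Sketched as the loop the per-iteration trace above implies:

    declare -a sums
    for ((i = 0; i < 2; i++)); do
        # Read iteration i's 1 GiB window back into the scratch file
        "$SPDK_DIR/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$SPDK_DIR/test/ftl/config/ini.json" \
            --ib=ftln1 --of="$SPDK_DIR/test/ftl/file" \
            --bs=1048576 --count=1024 --qd=2 --skip=$((i * 1024))
        # Keep only the digest column, as upgrade_shutdown.sh@47-48 does
        sums[i]=$(md5sum "$SPDK_DIR/test/ftl/file" | cut -f1 -d' ')
    done
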
00:29:35.794 [2024-11-20 21:11:53.452844] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94089 ] 00:29:35.794 [2024-11-20 21:11:53.592544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.794 [2024-11-20 21:11:53.614692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.731  [2024-11-20T21:11:55.840Z] Copying: 241/1024 [MB] (241 MBps) [2024-11-20T21:11:57.214Z] Copying: 488/1024 [MB] (247 MBps) [2024-11-20T21:11:58.162Z] Copying: 723/1024 [MB] (235 MBps) [2024-11-20T21:11:58.162Z] Copying: 965/1024 [MB] (242 MBps) [2024-11-20T21:11:58.419Z] Copying: 1024/1024 [MB] (average 240 MBps) 00:29:40.300 00:29:40.300 Calculate MD5 checksum, iteration 2 00:29:40.300 21:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:40.300 21:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:40.300 21:11:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:40.300 21:11:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:40.300 21:11:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:40.300 21:11:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:40.300 21:11:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:40.300 21:11:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:40.300 [2024-11-20 21:11:58.293778] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
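
Once the second checksum lands below, the test turns to FTL properties over RPC: it enables verbose_mode (which exposes the band/chunk dump that follows), arms prep_upgrade_on_shutdown, and uses jq to count cache chunks that actually hold data before relying on shutdown to persist them. A condensed sketch; the jq filter is verbatim from the trace, while the failure exit on a zero count is a hedged reading of the @64 check:

    rpc="$SPDK_DIR/scripts/rpc.py"
    $rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true              # expose advanced properties
    $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true  # arm the upgrade path
    # Count non-empty NV cache chunks; the run below finds used=3
    used=$($rpc bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] && exit 1   # nothing dirty would make the shutdown test vacuous
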
00:29:40.300 [2024-11-20 21:11:58.293889] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94141 ] 00:29:40.556 [2024-11-20 21:11:58.434622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:40.556 [2024-11-20 21:11:58.458293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.929  [2024-11-20T21:12:00.615Z] Copying: 632/1024 [MB] (632 MBps) [2024-11-20T21:12:01.185Z] Copying: 1024/1024 [MB] (average 607 MBps) 00:29:43.066 00:29:43.066 21:12:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:43.066 21:12:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:45.613 21:12:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:45.613 21:12:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=40871226f9cb05b62b305a151a24021d 00:29:45.613 21:12:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:45.613 21:12:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:45.613 21:12:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:45.613 [2024-11-20 21:12:03.435741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.613 [2024-11-20 21:12:03.435924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:45.613 [2024-11-20 21:12:03.435975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:45.613 [2024-11-20 21:12:03.435994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.613 [2024-11-20 21:12:03.436035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.613 [2024-11-20 21:12:03.436053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:45.613 [2024-11-20 21:12:03.436068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:45.613 [2024-11-20 21:12:03.436082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.613 [2024-11-20 21:12:03.436106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.613 [2024-11-20 21:12:03.436123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:45.613 [2024-11-20 21:12:03.436139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:45.613 [2024-11-20 21:12:03.436183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.613 [2024-11-20 21:12:03.436341] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.581 ms, result 0 00:29:45.613 true 00:29:45.613 21:12:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:45.613 { 00:29:45.613 "name": "ftl", 00:29:45.613 "properties": [ 00:29:45.613 { 00:29:45.613 "name": "superblock_version", 00:29:45.613 "value": 5, 00:29:45.613 "read-only": true 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "name": "base_device", 00:29:45.613 "bands": [ 00:29:45.613 { 00:29:45.613 "id": 0, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 
00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 1, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 2, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 3, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 4, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 5, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 6, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 7, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 8, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 9, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 10, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 11, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 12, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 13, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 14, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 15, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 16, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 17, 00:29:45.613 "state": "FREE", 00:29:45.613 "validity": 0.0 00:29:45.613 } 00:29:45.613 ], 00:29:45.613 "read-only": true 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "name": "cache_device", 00:29:45.613 "type": "bdev", 00:29:45.613 "chunks": [ 00:29:45.613 { 00:29:45.613 "id": 0, 00:29:45.613 "state": "INACTIVE", 00:29:45.613 "utilization": 0.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 1, 00:29:45.613 "state": "CLOSED", 00:29:45.613 "utilization": 1.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 2, 00:29:45.613 "state": "CLOSED", 00:29:45.613 "utilization": 1.0 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 3, 00:29:45.613 "state": "OPEN", 00:29:45.613 "utilization": 0.001953125 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "id": 4, 00:29:45.613 "state": "OPEN", 00:29:45.613 "utilization": 0.0 00:29:45.613 } 00:29:45.613 ], 00:29:45.613 "read-only": true 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "name": "verbose_mode", 00:29:45.613 "value": true, 00:29:45.613 "unit": "", 00:29:45.613 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:45.613 }, 00:29:45.613 { 00:29:45.613 "name": "prep_upgrade_on_shutdown", 00:29:45.613 "value": false, 00:29:45.613 "unit": "", 00:29:45.613 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:45.613 } 00:29:45.613 ] 00:29:45.613 } 00:29:45.613 21:12:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:45.875 [2024-11-20 21:12:03.844093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:45.875 [2024-11-20 21:12:03.844239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:45.875 [2024-11-20 21:12:03.844280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:45.875 [2024-11-20 21:12:03.844299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.875 [2024-11-20 21:12:03.844331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.875 [2024-11-20 21:12:03.844347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:45.875 [2024-11-20 21:12:03.844362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:45.875 [2024-11-20 21:12:03.844376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.875 [2024-11-20 21:12:03.844400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:45.875 [2024-11-20 21:12:03.844415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:45.875 [2024-11-20 21:12:03.844430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:45.875 [2024-11-20 21:12:03.844468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:45.875 [2024-11-20 21:12:03.844528] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.422 ms, result 0 00:29:45.875 true 00:29:45.875 21:12:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:45.875 21:12:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:45.875 21:12:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:46.137 21:12:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:46.137 21:12:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:46.137 21:12:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:46.137 [2024-11-20 21:12:04.252441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.137 [2024-11-20 21:12:04.252477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:46.137 [2024-11-20 21:12:04.252486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:46.137 [2024-11-20 21:12:04.252492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.137 [2024-11-20 21:12:04.252508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.137 [2024-11-20 21:12:04.252515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:46.137 [2024-11-20 21:12:04.252520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:46.137 [2024-11-20 21:12:04.252526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.137 [2024-11-20 21:12:04.252540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.137 [2024-11-20 21:12:04.252547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:46.137 [2024-11-20 21:12:04.252553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:46.398 [2024-11-20 21:12:04.252558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:46.398 [2024-11-20 21:12:04.252600] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.151 ms, result 0 00:29:46.398 true 00:29:46.398 21:12:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:46.398 { 00:29:46.398 "name": "ftl", 00:29:46.398 "properties": [ 00:29:46.398 { 00:29:46.398 "name": "superblock_version", 00:29:46.398 "value": 5, 00:29:46.398 "read-only": true 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "name": "base_device", 00:29:46.398 "bands": [ 00:29:46.398 { 00:29:46.398 "id": 0, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 1, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 2, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 3, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 4, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 5, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 6, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 7, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 8, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 9, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 10, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 11, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 12, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 13, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 14, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 15, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 16, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 17, 00:29:46.398 "state": "FREE", 00:29:46.398 "validity": 0.0 00:29:46.398 } 00:29:46.398 ], 00:29:46.398 "read-only": true 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "name": "cache_device", 00:29:46.398 "type": "bdev", 00:29:46.398 "chunks": [ 00:29:46.398 { 00:29:46.398 "id": 0, 00:29:46.398 "state": "INACTIVE", 00:29:46.398 "utilization": 0.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 1, 00:29:46.398 "state": "CLOSED", 00:29:46.398 "utilization": 1.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 2, 00:29:46.398 "state": "CLOSED", 00:29:46.398 "utilization": 1.0 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 3, 00:29:46.398 "state": "OPEN", 00:29:46.398 "utilization": 0.001953125 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "id": 4, 00:29:46.398 "state": "OPEN", 00:29:46.398 "utilization": 0.0 00:29:46.398 } 00:29:46.398 ], 00:29:46.398 "read-only": true 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "name": "verbose_mode", 
00:29:46.398 "value": true, 00:29:46.398 "unit": "", 00:29:46.398 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:46.398 }, 00:29:46.398 { 00:29:46.398 "name": "prep_upgrade_on_shutdown", 00:29:46.398 "value": true, 00:29:46.398 "unit": "", 00:29:46.398 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:46.399 } 00:29:46.399 ] 00:29:46.399 } 00:29:46.399 21:12:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:46.399 21:12:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93837 ]] 00:29:46.399 21:12:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93837 00:29:46.399 21:12:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 93837 ']' 00:29:46.399 21:12:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 93837 00:29:46.399 21:12:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:46.399 21:12:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:46.399 21:12:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93837 00:29:46.399 killing process with pid 93837 00:29:46.399 21:12:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:46.399 21:12:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:46.399 21:12:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93837' 00:29:46.399 21:12:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 93837 00:29:46.399 21:12:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 93837 00:29:46.659 [2024-11-20 21:12:04.573187] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:46.659 [2024-11-20 21:12:04.577071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.659 [2024-11-20 21:12:04.577101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:46.659 [2024-11-20 21:12:04.577110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:46.659 [2024-11-20 21:12:04.577117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:46.659 [2024-11-20 21:12:04.577134] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:46.659 [2024-11-20 21:12:04.577517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:46.659 [2024-11-20 21:12:04.577539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:46.659 [2024-11-20 21:12:04.577546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.374 ms 00:29:46.659 [2024-11-20 21:12:04.577555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.561309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.668 [2024-11-20 21:12:13.561371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:56.668 [2024-11-20 21:12:13.561383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8983.703 ms 00:29:56.668 [2024-11-20 21:12:13.561391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.562753] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:56.668 [2024-11-20 21:12:13.562768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:56.668 [2024-11-20 21:12:13.562782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.340 ms 00:29:56.668 [2024-11-20 21:12:13.562788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.563654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.668 [2024-11-20 21:12:13.563673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:56.668 [2024-11-20 21:12:13.563684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.844 ms 00:29:56.668 [2024-11-20 21:12:13.563690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.565390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.668 [2024-11-20 21:12:13.565419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:56.668 [2024-11-20 21:12:13.565426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.656 ms 00:29:56.668 [2024-11-20 21:12:13.565432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.568006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.668 [2024-11-20 21:12:13.568035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:56.668 [2024-11-20 21:12:13.568042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.548 ms 00:29:56.668 [2024-11-20 21:12:13.568048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.568108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.668 [2024-11-20 21:12:13.568115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:56.668 [2024-11-20 21:12:13.568122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:56.668 [2024-11-20 21:12:13.568127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.569533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.668 [2024-11-20 21:12:13.569563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:56.668 [2024-11-20 21:12:13.569570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.388 ms 00:29:56.668 [2024-11-20 21:12:13.569576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.570945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.668 [2024-11-20 21:12:13.571100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:56.668 [2024-11-20 21:12:13.571112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.345 ms 00:29:56.668 [2024-11-20 21:12:13.571117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.572215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.668 [2024-11-20 21:12:13.572242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:56.668 [2024-11-20 21:12:13.572249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.074 ms 00:29:56.668 [2024-11-20 21:12:13.572254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.573266] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.668 [2024-11-20 21:12:13.573294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:56.668 [2024-11-20 21:12:13.573301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.967 ms 00:29:56.668 [2024-11-20 21:12:13.573306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.573328] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:56.668 [2024-11-20 21:12:13.573339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:56.668 [2024-11-20 21:12:13.573346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:56.668 [2024-11-20 21:12:13.573352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:56.668 [2024-11-20 21:12:13.573358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:56.668 [2024-11-20 21:12:13.573445] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:56.668 [2024-11-20 21:12:13.573451] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5c537862-7c69-4a14-9c45-e8599522a66b 00:29:56.668 [2024-11-20 21:12:13.573457] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:56.668 [2024-11-20 21:12:13.573462] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:56.668 [2024-11-20 21:12:13.573467] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:56.668 [2024-11-20 21:12:13.573476] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:56.668 [2024-11-20 21:12:13.573482] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:56.668 [2024-11-20 21:12:13.573487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:56.668 [2024-11-20 21:12:13.573493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:56.668 [2024-11-20 21:12:13.573498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:56.668 [2024-11-20 21:12:13.573504] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:56.668 [2024-11-20 21:12:13.573510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.668 [2024-11-20 21:12:13.573517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:56.668 [2024-11-20 21:12:13.573524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:29:56.668 [2024-11-20 21:12:13.573530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.574988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.668 [2024-11-20 21:12:13.575028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:56.668 [2024-11-20 21:12:13.575045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.446 ms 00:29:56.668 [2024-11-20 21:12:13.575060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.575136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.668 [2024-11-20 21:12:13.575208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:56.668 [2024-11-20 21:12:13.575226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:56.668 [2024-11-20 21:12:13.575240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.579832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.668 [2024-11-20 21:12:13.579932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:56.668 [2024-11-20 21:12:13.579977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.668 [2024-11-20 21:12:13.580002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.580036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.668 [2024-11-20 21:12:13.580052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:56.668 [2024-11-20 21:12:13.580066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.668 [2024-11-20 21:12:13.580081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.668 [2024-11-20 21:12:13.580128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.668 [2024-11-20 21:12:13.580153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:56.668 [2024-11-20 21:12:13.580169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.669 [2024-11-20 21:12:13.580213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:13.580238] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.669 [2024-11-20 21:12:13.580254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:56.669 [2024-11-20 21:12:13.580269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.669 [2024-11-20 21:12:13.580286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:13.588404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.669 [2024-11-20 21:12:13.588530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:56.669 [2024-11-20 21:12:13.588569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.669 [2024-11-20 21:12:13.588587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:13.595300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.669 [2024-11-20 21:12:13.595416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:56.669 [2024-11-20 21:12:13.595456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.669 [2024-11-20 21:12:13.595474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:13.595535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.669 [2024-11-20 21:12:13.595553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:56.669 [2024-11-20 21:12:13.595575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.669 [2024-11-20 21:12:13.595595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:13.595633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.669 [2024-11-20 21:12:13.595650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:56.669 [2024-11-20 21:12:13.595702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.669 [2024-11-20 21:12:13.595720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:13.595794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.669 [2024-11-20 21:12:13.595814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:56.669 [2024-11-20 21:12:13.595829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.669 [2024-11-20 21:12:13.595847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:13.595883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.669 [2024-11-20 21:12:13.595900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:56.669 [2024-11-20 21:12:13.595915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.669 [2024-11-20 21:12:13.595958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:13.595998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.669 [2024-11-20 21:12:13.596040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:56.669 [2024-11-20 21:12:13.596058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.669 [2024-11-20 21:12:13.596282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 
[2024-11-20 21:12:13.596334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:56.669 [2024-11-20 21:12:13.596352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:56.669 [2024-11-20 21:12:13.596367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:56.669 [2024-11-20 21:12:13.596382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:13.596489] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9019.359 ms, result 0 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94319 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:56.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94319 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94319 ']' 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.669 21:12:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:56.669 [2024-11-20 21:12:13.886215] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
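
After the 'FTL shutdown' management process above completes (about 9 s, dominated by the Stop core poller step), tcp_target_setup restarts the main target cold from the config snapshot taken earlier, which is what re-creates the base and cache bdevs and re-loads the FTL instance below. A sketch of the restart (ftl/common.sh@85-91); the RPC poll again stands in for waitforlisten:

    "$SPDK_DIR/build/bin/spdk_tgt" '--cpumask=[0]' \
        --config="$SPDK_DIR/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!
    until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods &>/dev/null; do sleep 0.1; done
    # The JSON config replays bdev creation asynchronously, which is why the log
    # below shows bdev_open_ext unable to find "cachen1" until it appears.
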
00:29:56.669 [2024-11-20 21:12:13.886336] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94319 ] 00:29:56.669 [2024-11-20 21:12:14.028173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.669 [2024-11-20 21:12:14.051702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.669 [2024-11-20 21:12:14.302237] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:56.669 [2024-11-20 21:12:14.302296] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:56.669 [2024-11-20 21:12:14.447830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.669 [2024-11-20 21:12:14.447865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:56.669 [2024-11-20 21:12:14.447878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:56.669 [2024-11-20 21:12:14.447885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:14.447926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.669 [2024-11-20 21:12:14.447934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:56.669 [2024-11-20 21:12:14.447942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:29:56.669 [2024-11-20 21:12:14.447947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:14.447961] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:56.669 [2024-11-20 21:12:14.448140] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:56.669 [2024-11-20 21:12:14.448152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.669 [2024-11-20 21:12:14.448158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:56.669 [2024-11-20 21:12:14.448164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.194 ms 00:29:56.669 [2024-11-20 21:12:14.448169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:14.449104] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:56.669 [2024-11-20 21:12:14.451228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.669 [2024-11-20 21:12:14.451370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:56.669 [2024-11-20 21:12:14.451392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.126 ms 00:29:56.669 [2024-11-20 21:12:14.451401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:14.451443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.669 [2024-11-20 21:12:14.451451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:56.669 [2024-11-20 21:12:14.451457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:56.669 [2024-11-20 21:12:14.451463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:14.455882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.669 [2024-11-20 
21:12:14.455908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:56.669 [2024-11-20 21:12:14.455916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.379 ms 00:29:56.669 [2024-11-20 21:12:14.455921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:14.455952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.669 [2024-11-20 21:12:14.455959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:56.669 [2024-11-20 21:12:14.455965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:56.669 [2024-11-20 21:12:14.455970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:14.456006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.669 [2024-11-20 21:12:14.456015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:56.669 [2024-11-20 21:12:14.456023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:56.669 [2024-11-20 21:12:14.456031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:14.456048] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:56.669 [2024-11-20 21:12:14.457181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.669 [2024-11-20 21:12:14.457205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:56.669 [2024-11-20 21:12:14.457212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.138 ms 00:29:56.669 [2024-11-20 21:12:14.457218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:14.457238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.669 [2024-11-20 21:12:14.457247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:56.669 [2024-11-20 21:12:14.457253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:56.669 [2024-11-20 21:12:14.457262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.669 [2024-11-20 21:12:14.457277] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:56.669 [2024-11-20 21:12:14.457290] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:56.669 [2024-11-20 21:12:14.457316] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:56.669 [2024-11-20 21:12:14.457326] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:56.669 [2024-11-20 21:12:14.457408] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:56.670 [2024-11-20 21:12:14.457418] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:56.670 [2024-11-20 21:12:14.457425] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:56.670 [2024-11-20 21:12:14.457433] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:56.670 [2024-11-20 21:12:14.457442] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:29:56.670 [2024-11-20 21:12:14.457448] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:56.670 [2024-11-20 21:12:14.457456] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:56.670 [2024-11-20 21:12:14.457461] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:56.670 [2024-11-20 21:12:14.457467] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:56.670 [2024-11-20 21:12:14.457472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.670 [2024-11-20 21:12:14.457478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:56.670 [2024-11-20 21:12:14.457485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.197 ms 00:29:56.670 [2024-11-20 21:12:14.457493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.670 [2024-11-20 21:12:14.457558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.670 [2024-11-20 21:12:14.457567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:56.670 [2024-11-20 21:12:14.457572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:56.670 [2024-11-20 21:12:14.457577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.670 [2024-11-20 21:12:14.457653] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:56.670 [2024-11-20 21:12:14.457660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:56.670 [2024-11-20 21:12:14.457667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:56.670 [2024-11-20 21:12:14.457674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.670 [2024-11-20 21:12:14.457680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:56.670 [2024-11-20 21:12:14.457685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:56.670 [2024-11-20 21:12:14.457690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:56.670 [2024-11-20 21:12:14.457695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:56.670 [2024-11-20 21:12:14.457701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:56.670 [2024-11-20 21:12:14.457706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.670 [2024-11-20 21:12:14.457711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:56.670 [2024-11-20 21:12:14.457719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:56.670 [2024-11-20 21:12:14.457724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.670 [2024-11-20 21:12:14.457729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:56.670 [2024-11-20 21:12:14.457735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:56.670 [2024-11-20 21:12:14.457739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.670 [2024-11-20 21:12:14.457762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:56.670 [2024-11-20 21:12:14.457767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:56.670 [2024-11-20 21:12:14.457772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.670 [2024-11-20 21:12:14.457777] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:56.670 [2024-11-20 21:12:14.457782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:56.670 [2024-11-20 21:12:14.457787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.670 [2024-11-20 21:12:14.457806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:56.670 [2024-11-20 21:12:14.457812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:56.670 [2024-11-20 21:12:14.457817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.670 [2024-11-20 21:12:14.457822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:56.670 [2024-11-20 21:12:14.457827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:56.670 [2024-11-20 21:12:14.457836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.670 [2024-11-20 21:12:14.457842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:56.670 [2024-11-20 21:12:14.457848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:56.670 [2024-11-20 21:12:14.457853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:56.670 [2024-11-20 21:12:14.457859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:56.670 [2024-11-20 21:12:14.457868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:56.670 [2024-11-20 21:12:14.457874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.670 [2024-11-20 21:12:14.457879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:56.670 [2024-11-20 21:12:14.457885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:56.670 [2024-11-20 21:12:14.457890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.670 [2024-11-20 21:12:14.457896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:56.670 [2024-11-20 21:12:14.457901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:56.670 [2024-11-20 21:12:14.457907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.670 [2024-11-20 21:12:14.457913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:56.670 [2024-11-20 21:12:14.457919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:56.670 [2024-11-20 21:12:14.457925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.670 [2024-11-20 21:12:14.457931] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:56.670 [2024-11-20 21:12:14.457937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:56.670 [2024-11-20 21:12:14.457960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:56.670 [2024-11-20 21:12:14.457970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:56.670 [2024-11-20 21:12:14.457976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:56.670 [2024-11-20 21:12:14.457984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:56.670 [2024-11-20 21:12:14.457989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:56.670 [2024-11-20 21:12:14.457995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:56.670 [2024-11-20 21:12:14.458000] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:56.670 [2024-11-20 21:12:14.458006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:56.670 [2024-11-20 21:12:14.458012] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:56.670 [2024-11-20 21:12:14.458018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:56.670 [2024-11-20 21:12:14.458025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:56.670 [2024-11-20 21:12:14.458030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:56.670 [2024-11-20 21:12:14.458035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:56.670 [2024-11-20 21:12:14.458040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:56.670 [2024-11-20 21:12:14.458045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:56.670 [2024-11-20 21:12:14.458051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:56.670 [2024-11-20 21:12:14.458055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:56.670 [2024-11-20 21:12:14.458061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:56.670 [2024-11-20 21:12:14.458066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:56.670 [2024-11-20 21:12:14.458073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:56.670 [2024-11-20 21:12:14.458078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:56.670 [2024-11-20 21:12:14.458083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:56.670 [2024-11-20 21:12:14.458088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:56.670 [2024-11-20 21:12:14.458094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:56.670 [2024-11-20 21:12:14.458099] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:56.670 [2024-11-20 21:12:14.458107] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:56.670 [2024-11-20 21:12:14.458113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:56.670 [2024-11-20 21:12:14.458118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:56.670 [2024-11-20 21:12:14.458123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:56.670 [2024-11-20 21:12:14.458128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:56.670 [2024-11-20 21:12:14.458134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.670 [2024-11-20 21:12:14.458143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:56.670 [2024-11-20 21:12:14.458151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.536 ms 00:29:56.670 [2024-11-20 21:12:14.458156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.670 [2024-11-20 21:12:14.458186] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:56.670 [2024-11-20 21:12:14.458195] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:00.879 [2024-11-20 21:12:18.203726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.203798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:00.879 [2024-11-20 21:12:18.203813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3745.524 ms 00:30:00.879 [2024-11-20 21:12:18.203822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.212705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.212760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:00.879 [2024-11-20 21:12:18.212772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.805 ms 00:30:00.879 [2024-11-20 21:12:18.212780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.212844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.212854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:00.879 [2024-11-20 21:12:18.212863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:00.879 [2024-11-20 21:12:18.212878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.222214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.222260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:00.879 [2024-11-20 21:12:18.222271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.298 ms 00:30:00.879 [2024-11-20 21:12:18.222279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.222307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.222315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:00.879 [2024-11-20 21:12:18.222324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:00.879 [2024-11-20 21:12:18.222334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.222720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.222738] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:00.879 [2024-11-20 21:12:18.222783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.343 ms 00:30:00.879 [2024-11-20 21:12:18.222792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.222833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.222842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:00.879 [2024-11-20 21:12:18.222851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:00.879 [2024-11-20 21:12:18.222859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.229013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.229046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:00.879 [2024-11-20 21:12:18.229055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.121 ms 00:30:00.879 [2024-11-20 21:12:18.229062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.231954] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:00.879 [2024-11-20 21:12:18.231993] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:00.879 [2024-11-20 21:12:18.232004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.232012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:00.879 [2024-11-20 21:12:18.232021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.843 ms 00:30:00.879 [2024-11-20 21:12:18.232028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.236156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.236192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:00.879 [2024-11-20 21:12:18.236202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.105 ms 00:30:00.879 [2024-11-20 21:12:18.236209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.238227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.238372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:00.879 [2024-11-20 21:12:18.238388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.977 ms 00:30:00.879 [2024-11-20 21:12:18.238396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.240251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.240284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:00.879 [2024-11-20 21:12:18.240293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.832 ms 00:30:00.879 [2024-11-20 21:12:18.240300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.240713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.240731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:00.879 [2024-11-20 
21:12:18.240740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.337 ms 00:30:00.879 [2024-11-20 21:12:18.240761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.269389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.269449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:00.879 [2024-11-20 21:12:18.269465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.607 ms 00:30:00.879 [2024-11-20 21:12:18.269474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.277293] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:00.879 [2024-11-20 21:12:18.278253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.278292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:00.879 [2024-11-20 21:12:18.278309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.729 ms 00:30:00.879 [2024-11-20 21:12:18.278317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.278388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.278400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:00.879 [2024-11-20 21:12:18.278410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:00.879 [2024-11-20 21:12:18.278419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.278464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.278473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:00.879 [2024-11-20 21:12:18.278486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:00.879 [2024-11-20 21:12:18.278496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.278517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.278526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:00.879 [2024-11-20 21:12:18.278537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:00.879 [2024-11-20 21:12:18.278545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.278578] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:00.879 [2024-11-20 21:12:18.278589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.278597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:00.879 [2024-11-20 21:12:18.278605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:00.879 [2024-11-20 21:12:18.278615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.282603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.282642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:00.879 [2024-11-20 21:12:18.282652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.968 ms 00:30:00.879 [2024-11-20 21:12:18.282660] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.282735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.879 [2024-11-20 21:12:18.282773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:00.879 [2024-11-20 21:12:18.282783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:00.879 [2024-11-20 21:12:18.282791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.879 [2024-11-20 21:12:18.283814] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3835.526 ms, result 0 00:30:00.880 [2024-11-20 21:12:18.298989] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.880 [2024-11-20 21:12:18.314991] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:00.880 [2024-11-20 21:12:18.323123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:00.880 21:12:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.880 21:12:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:00.880 21:12:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:00.880 21:12:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:00.880 21:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:00.880 [2024-11-20 21:12:18.567225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.880 [2024-11-20 21:12:18.567287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:00.880 [2024-11-20 21:12:18.567301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:00.880 [2024-11-20 21:12:18.567310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.880 [2024-11-20 21:12:18.567334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.880 [2024-11-20 21:12:18.567344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:00.880 [2024-11-20 21:12:18.567353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:00.880 [2024-11-20 21:12:18.567364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.880 [2024-11-20 21:12:18.567386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:00.880 [2024-11-20 21:12:18.567395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:00.880 [2024-11-20 21:12:18.567403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:00.880 [2024-11-20 21:12:18.567411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:00.880 [2024-11-20 21:12:18.567471] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.238 ms, result 0 00:30:00.880 true 00:30:00.880 21:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:00.880 { 00:30:00.880 "name": "ftl", 00:30:00.880 "properties": [ 00:30:00.880 { 00:30:00.880 "name": "superblock_version", 00:30:00.880 "value": 5, 00:30:00.880 "read-only": true 00:30:00.880 }, 00:30:00.880 { 
00:30:00.880 "name": "base_device", 00:30:00.880 "bands": [ 00:30:00.880 { 00:30:00.880 "id": 0, 00:30:00.880 "state": "CLOSED", 00:30:00.880 "validity": 1.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 1, 00:30:00.880 "state": "CLOSED", 00:30:00.880 "validity": 1.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 2, 00:30:00.880 "state": "CLOSED", 00:30:00.880 "validity": 0.007843137254901933 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 3, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 4, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 5, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 6, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 7, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 8, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 9, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 10, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 11, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 12, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 13, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 14, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 15, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 16, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 17, 00:30:00.880 "state": "FREE", 00:30:00.880 "validity": 0.0 00:30:00.880 } 00:30:00.880 ], 00:30:00.880 "read-only": true 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "name": "cache_device", 00:30:00.880 "type": "bdev", 00:30:00.880 "chunks": [ 00:30:00.880 { 00:30:00.880 "id": 0, 00:30:00.880 "state": "INACTIVE", 00:30:00.880 "utilization": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 1, 00:30:00.880 "state": "OPEN", 00:30:00.880 "utilization": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 2, 00:30:00.880 "state": "OPEN", 00:30:00.880 "utilization": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 3, 00:30:00.880 "state": "FREE", 00:30:00.880 "utilization": 0.0 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "id": 4, 00:30:00.880 "state": "FREE", 00:30:00.880 "utilization": 0.0 00:30:00.880 } 00:30:00.880 ], 00:30:00.880 "read-only": true 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "name": "verbose_mode", 00:30:00.880 "value": true, 00:30:00.880 "unit": "", 00:30:00.880 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:00.880 }, 00:30:00.880 { 00:30:00.880 "name": "prep_upgrade_on_shutdown", 00:30:00.880 "value": false, 00:30:00.880 "unit": "", 00:30:00.880 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:00.880 } 00:30:00.880 ] 00:30:00.880 } 00:30:00.880 21:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:00.880 21:12:18 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:00.880 21:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:01.142 Validate MD5 checksum, iteration 1 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:01.142 21:12:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:01.404 [2024-11-20 21:12:19.322816] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:30:01.404 [2024-11-20 21:12:19.323202] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94394 ] 00:30:01.404 [2024-11-20 21:12:19.477341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.404 [2024-11-20 21:12:19.517708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.318  [2024-11-20T21:12:22.004Z] Copying: 466/1024 [MB] (466 MBps) [2024-11-20T21:12:22.572Z] Copying: 1024/1024 [MB] (average 525 MBps) 00:30:04.453 00:30:04.453 21:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:04.453 21:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:06.365 21:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:06.365 21:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=83c206189037ad737c4ac2036711e543 00:30:06.365 21:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 83c206189037ad737c4ac2036711e543 != \8\3\c\2\0\6\1\8\9\0\3\7\a\d\7\3\7\c\4\a\c\2\0\3\6\7\1\1\e\5\4\3 ]] 00:30:06.365 21:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:06.365 Validate MD5 checksum, iteration 2 00:30:06.365 21:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:06.365 21:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:06.365 21:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:06.365 21:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:06.365 21:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:06.365 21:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:06.365 21:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:06.365 21:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:06.365 [2024-11-20 21:12:24.143162] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:30:06.365 [2024-11-20 21:12:24.143412] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94451 ] 00:30:06.365 [2024-11-20 21:12:24.281585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.365 [2024-11-20 21:12:24.303352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.741  [2024-11-20T21:12:26.430Z] Copying: 670/1024 [MB] (670 MBps) [2024-11-20T21:12:29.730Z] Copying: 1024/1024 [MB] (average 650 MBps) 00:30:11.611 00:30:11.611 21:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:11.611 21:12:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=40871226f9cb05b62b305a151a24021d 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 40871226f9cb05b62b305a151a24021d != \4\0\8\7\1\2\2\6\f\9\c\b\0\5\b\6\2\b\3\0\5\a\1\5\1\a\2\4\0\2\1\d ]] 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94319 ]] 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94319 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:13.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94529 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94529 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94529 ']' 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
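The two dd passes above are the heart of the check: the script first gates on bdev_ftl_get_properties output (the jq filters logged above count cache chunks with non-zero utilization and bands left OPENED, and both must report 0), then reads the same two 1 GiB extents back through the NVMe/TCP initiator and recomputes their MD5 sums. A minimal sketch of that loop, assuming the tcp_dd helper behaves as the ftl/common.sh trace shows (it wraps spdk_dd against the TCP target) and that, as in the comparisons logged above, each sum is checked against an expected value captured earlier:

    # Sketch of the checksum loop traced above; illustrative, not the verbatim test.
    skip=0
    iterations=2
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read 1024 x 1 MiB blocks from the ftln1 namespace over NVMe/TCP
        tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
        # The real script compares $sum with the checksum recorded for this extent
        skip=$((skip + 1024))
    done

The kill -9 issued after the second iteration is deliberate: SIGKILL denies FTL a clean shutdown, so the target restart traced below has to take the dirty-shutdown recovery path.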
00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:13.525 21:12:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:13.525 [2024-11-20 21:12:31.257020] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:30:13.525 [2024-11-20 21:12:31.257134] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94529 ] 00:30:13.525 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 94319 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:13.525 [2024-11-20 21:12:31.391729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.525 [2024-11-20 21:12:31.408207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.788 [2024-11-20 21:12:31.657308] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:13.788 [2024-11-20 21:12:31.657359] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:13.788 [2024-11-20 21:12:31.799008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.788 [2024-11-20 21:12:31.799043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:13.788 [2024-11-20 21:12:31.799052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:13.788 [2024-11-20 21:12:31.799060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.788 [2024-11-20 21:12:31.799097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.788 [2024-11-20 21:12:31.799107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:13.788 [2024-11-20 21:12:31.799115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:13.788 [2024-11-20 21:12:31.799120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.788 [2024-11-20 21:12:31.799134] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:13.788 [2024-11-20 21:12:31.799303] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:13.788 [2024-11-20 21:12:31.799315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.788 [2024-11-20 21:12:31.799320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:13.788 [2024-11-20 21:12:31.799327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.184 ms 00:30:13.788 [2024-11-20 21:12:31.799332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.788 [2024-11-20 21:12:31.799536] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:13.788 [2024-11-20 21:12:31.803336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.788 [2024-11-20 21:12:31.803366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:13.788 [2024-11-20 21:12:31.803377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.800 ms 
00:30:13.788 [2024-11-20 21:12:31.803383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.788 [2024-11-20 21:12:31.804162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.788 [2024-11-20 21:12:31.804186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:13.788 [2024-11-20 21:12:31.804195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:30:13.788 [2024-11-20 21:12:31.804204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.788 [2024-11-20 21:12:31.804418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.788 [2024-11-20 21:12:31.804429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:13.788 [2024-11-20 21:12:31.804437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.166 ms 00:30:13.788 [2024-11-20 21:12:31.804443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.788 [2024-11-20 21:12:31.804470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.788 [2024-11-20 21:12:31.804476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:13.788 [2024-11-20 21:12:31.804482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:13.788 [2024-11-20 21:12:31.804488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.788 [2024-11-20 21:12:31.804506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.788 [2024-11-20 21:12:31.804512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:13.788 [2024-11-20 21:12:31.804520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:13.788 [2024-11-20 21:12:31.804527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.788 [2024-11-20 21:12:31.804541] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:13.788 [2024-11-20 21:12:31.805233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.788 [2024-11-20 21:12:31.805249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:13.788 [2024-11-20 21:12:31.805260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.694 ms 00:30:13.788 [2024-11-20 21:12:31.805265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.788 [2024-11-20 21:12:31.805283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.788 [2024-11-20 21:12:31.805289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:13.788 [2024-11-20 21:12:31.805296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:13.788 [2024-11-20 21:12:31.805302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.788 [2024-11-20 21:12:31.805317] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:13.788 [2024-11-20 21:12:31.805331] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:13.788 [2024-11-20 21:12:31.805358] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:13.788 [2024-11-20 21:12:31.805372] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:13.788 [2024-11-20 
21:12:31.805451] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:13.788 [2024-11-20 21:12:31.805460] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:13.788 [2024-11-20 21:12:31.805468] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:13.788 [2024-11-20 21:12:31.805476] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:13.788 [2024-11-20 21:12:31.805483] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:13.788 [2024-11-20 21:12:31.805489] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:13.788 [2024-11-20 21:12:31.805494] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:13.788 [2024-11-20 21:12:31.805499] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:13.788 [2024-11-20 21:12:31.805505] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:13.788 [2024-11-20 21:12:31.805511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.788 [2024-11-20 21:12:31.805516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:13.788 [2024-11-20 21:12:31.805524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:30:13.788 [2024-11-20 21:12:31.805529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.788 [2024-11-20 21:12:31.805593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.788 [2024-11-20 21:12:31.805599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:13.788 [2024-11-20 21:12:31.805605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:30:13.788 [2024-11-20 21:12:31.805614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.788 [2024-11-20 21:12:31.805687] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:13.788 [2024-11-20 21:12:31.805695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:13.788 [2024-11-20 21:12:31.805703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:13.788 [2024-11-20 21:12:31.805709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:13.788 [2024-11-20 21:12:31.805716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:13.788 [2024-11-20 21:12:31.805721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:13.788 [2024-11-20 21:12:31.805726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:13.788 [2024-11-20 21:12:31.805731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:13.788 [2024-11-20 21:12:31.805737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:13.788 [2024-11-20 21:12:31.805742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:13.788 [2024-11-20 21:12:31.805762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:13.788 [2024-11-20 21:12:31.805768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:13.788 [2024-11-20 21:12:31.805774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:13.788 [2024-11-20 
21:12:31.805779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:13.788 [2024-11-20 21:12:31.805784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:13.788 [2024-11-20 21:12:31.805792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:13.788 [2024-11-20 21:12:31.805798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:13.788 [2024-11-20 21:12:31.805802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:13.788 [2024-11-20 21:12:31.805808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:13.788 [2024-11-20 21:12:31.805814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:13.788 [2024-11-20 21:12:31.805819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:13.788 [2024-11-20 21:12:31.805824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:13.788 [2024-11-20 21:12:31.805829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:13.788 [2024-11-20 21:12:31.805834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:13.788 [2024-11-20 21:12:31.805839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:13.788 [2024-11-20 21:12:31.805845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:13.788 [2024-11-20 21:12:31.805849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:13.789 [2024-11-20 21:12:31.805854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:13.789 [2024-11-20 21:12:31.805859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:13.789 [2024-11-20 21:12:31.805864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:13.789 [2024-11-20 21:12:31.805869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:13.789 [2024-11-20 21:12:31.805877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:13.789 [2024-11-20 21:12:31.805882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:13.789 [2024-11-20 21:12:31.805888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:13.789 [2024-11-20 21:12:31.805894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:13.789 [2024-11-20 21:12:31.805900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:13.789 [2024-11-20 21:12:31.805905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:13.789 [2024-11-20 21:12:31.805911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:13.789 [2024-11-20 21:12:31.805916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:13.789 [2024-11-20 21:12:31.805922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:13.789 [2024-11-20 21:12:31.805928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:13.789 [2024-11-20 21:12:31.805933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:13.789 [2024-11-20 21:12:31.805948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:13.789 [2024-11-20 21:12:31.805954] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:13.789 [2024-11-20 21:12:31.805962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:13.789 
[2024-11-20 21:12:31.805968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:13.789 [2024-11-20 21:12:31.805974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:13.789 [2024-11-20 21:12:31.805983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:13.789 [2024-11-20 21:12:31.805989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:13.789 [2024-11-20 21:12:31.805995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:13.789 [2024-11-20 21:12:31.806000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:13.789 [2024-11-20 21:12:31.806006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:13.789 [2024-11-20 21:12:31.806012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:13.789 [2024-11-20 21:12:31.806018] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:13.789 [2024-11-20 21:12:31.806026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:13.789 [2024-11-20 21:12:31.806033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:13.789 [2024-11-20 21:12:31.806039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:13.789 [2024-11-20 21:12:31.806045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:13.789 [2024-11-20 21:12:31.806051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:13.789 [2024-11-20 21:12:31.806058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:13.789 [2024-11-20 21:12:31.806064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:13.789 [2024-11-20 21:12:31.806071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:13.789 [2024-11-20 21:12:31.806077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:13.789 [2024-11-20 21:12:31.806084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:13.789 [2024-11-20 21:12:31.806091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:13.789 [2024-11-20 21:12:31.806097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:13.789 [2024-11-20 21:12:31.806103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:13.789 [2024-11-20 21:12:31.806109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:13.789 [2024-11-20 21:12:31.806115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:13.789 [2024-11-20 21:12:31.806121] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:13.789 [2024-11-20 21:12:31.806128] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:13.789 [2024-11-20 21:12:31.806135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:13.789 [2024-11-20 21:12:31.806141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:13.789 [2024-11-20 21:12:31.806147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:13.789 [2024-11-20 21:12:31.806160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:13.789 [2024-11-20 21:12:31.806168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.789 [2024-11-20 21:12:31.806174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:13.789 [2024-11-20 21:12:31.806182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.532 ms 00:30:13.789 [2024-11-20 21:12:31.806189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.789 [2024-11-20 21:12:31.812592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.789 [2024-11-20 21:12:31.812697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:13.789 [2024-11-20 21:12:31.812760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.358 ms 00:30:13.789 [2024-11-20 21:12:31.812779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.789 [2024-11-20 21:12:31.812826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.789 [2024-11-20 21:12:31.812900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:13.789 [2024-11-20 21:12:31.812919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:13.789 [2024-11-20 21:12:31.812936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.789 [2024-11-20 21:12:31.820376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.789 [2024-11-20 21:12:31.820474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:13.789 [2024-11-20 21:12:31.820513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.386 ms 00:30:13.789 [2024-11-20 21:12:31.820530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.789 [2024-11-20 21:12:31.820591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.789 [2024-11-20 21:12:31.820611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:13.789 [2024-11-20 21:12:31.820629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:13.789 [2024-11-20 21:12:31.820666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.789 [2024-11-20 21:12:31.820779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.789 [2024-11-20 21:12:31.820808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:30:13.789 [2024-11-20 21:12:31.820916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:30:13.789 [2024-11-20 21:12:31.820935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.789 [2024-11-20 21:12:31.820983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.789 [2024-11-20 21:12:31.821074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:13.789 [2024-11-20 21:12:31.821093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:13.789 [2024-11-20 21:12:31.821107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.789 [2024-11-20 21:12:31.825923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.789 [2024-11-20 21:12:31.826012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:13.789 [2024-11-20 21:12:31.826054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.787 ms 00:30:13.789 [2024-11-20 21:12:31.826071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.789 [2024-11-20 21:12:31.826154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.789 [2024-11-20 21:12:31.826234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:13.789 [2024-11-20 21:12:31.826253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:13.789 [2024-11-20 21:12:31.826270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.789 [2024-11-20 21:12:31.845147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.789 [2024-11-20 21:12:31.845442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:13.789 [2024-11-20 21:12:31.845593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.848 ms 00:30:13.789 [2024-11-20 21:12:31.845658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.789 [2024-11-20 21:12:31.848425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.789 [2024-11-20 21:12:31.848649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:13.789 [2024-11-20 21:12:31.848831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.726 ms 00:30:13.789 [2024-11-20 21:12:31.848980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.789 [2024-11-20 21:12:31.864666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.789 [2024-11-20 21:12:31.864789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:13.789 [2024-11-20 21:12:31.864882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.557 ms 00:30:13.789 [2024-11-20 21:12:31.864910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.789 [2024-11-20 21:12:31.865013] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:13.789 [2024-11-20 21:12:31.865102] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:13.789 [2024-11-20 21:12:31.865217] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:13.789 [2024-11-20 21:12:31.865304] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:13.789 [2024-11-20 21:12:31.865328] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.789 [2024-11-20 21:12:31.865407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:13.790 [2024-11-20 21:12:31.865432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.378 ms 00:30:13.790 [2024-11-20 21:12:31.865451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.790 [2024-11-20 21:12:31.865505] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:13.790 [2024-11-20 21:12:31.865540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.790 [2024-11-20 21:12:31.865555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:13.790 [2024-11-20 21:12:31.865597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:30:13.790 [2024-11-20 21:12:31.865614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.790 [2024-11-20 21:12:31.867668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.790 [2024-11-20 21:12:31.867766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:13.790 [2024-11-20 21:12:31.867810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.027 ms 00:30:13.790 [2024-11-20 21:12:31.867824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.790 [2024-11-20 21:12:31.868299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.790 [2024-11-20 21:12:31.868319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:13.790 [2024-11-20 21:12:31.868326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:13.790 [2024-11-20 21:12:31.868332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:13.790 [2024-11-20 21:12:31.868372] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:13.790 [2024-11-20 21:12:31.868495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:13.790 [2024-11-20 21:12:31.868502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:13.790 [2024-11-20 21:12:31.868508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.124 ms 00:30:13.790 [2024-11-20 21:12:31.868515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.735 [2024-11-20 21:12:32.551271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.735 [2024-11-20 21:12:32.551610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:14.735 [2024-11-20 21:12:32.551694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 682.525 ms 00:30:14.735 [2024-11-20 21:12:32.551722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.735 [2024-11-20 21:12:32.553704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.735 [2024-11-20 21:12:32.553893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:14.735 [2024-11-20 21:12:32.553987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.330 ms 00:30:14.735 [2024-11-20 21:12:32.554012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.735 [2024-11-20 21:12:32.554587] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:30:14.735 [2024-11-20 21:12:32.554790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.735 [2024-11-20 21:12:32.554856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:14.735 [2024-11-20 21:12:32.554928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.651 ms 00:30:14.735 [2024-11-20 21:12:32.554951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.735 [2024-11-20 21:12:32.555079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.735 [2024-11-20 21:12:32.555111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:14.735 [2024-11-20 21:12:32.555130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:14.735 [2024-11-20 21:12:32.555140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:14.735 [2024-11-20 21:12:32.555184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 686.803 ms, result 0 00:30:14.735 [2024-11-20 21:12:32.555239] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:14.735 [2024-11-20 21:12:32.555336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:14.735 [2024-11-20 21:12:32.555349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:14.735 [2024-11-20 21:12:32.555360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.098 ms 00:30:14.735 [2024-11-20 21:12:32.555367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.469180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.680 [2024-11-20 21:12:33.469281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:15.680 [2024-11-20 21:12:33.469299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 913.205 ms 00:30:15.680 [2024-11-20 21:12:33.469308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.471676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.680 [2024-11-20 21:12:33.471732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:15.680 [2024-11-20 21:12:33.471766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.766 ms 00:30:15.680 [2024-11-20 21:12:33.471776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.472821] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:15.680 [2024-11-20 21:12:33.472872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.680 [2024-11-20 21:12:33.472882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:15.680 [2024-11-20 21:12:33.472892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.060 ms 00:30:15.680 [2024-11-20 21:12:33.472900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.472942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.680 [2024-11-20 21:12:33.472953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:15.680 [2024-11-20 21:12:33.472962] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:15.680 [2024-11-20 21:12:33.472970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.473010] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 917.771 ms, result 0 00:30:15.680 [2024-11-20 21:12:33.473061] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:15.680 [2024-11-20 21:12:33.473073] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:15.680 [2024-11-20 21:12:33.473084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.680 [2024-11-20 21:12:33.473093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:15.680 [2024-11-20 21:12:33.473102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1604.720 ms 00:30:15.680 [2024-11-20 21:12:33.473110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.473143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.680 [2024-11-20 21:12:33.473152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:15.680 [2024-11-20 21:12:33.473160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:15.680 [2024-11-20 21:12:33.473168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.484342] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:15.680 [2024-11-20 21:12:33.484499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.680 [2024-11-20 21:12:33.484512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:15.680 [2024-11-20 21:12:33.484529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.314 ms 00:30:15.680 [2024-11-20 21:12:33.484537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.485297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.680 [2024-11-20 21:12:33.485317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:15.680 [2024-11-20 21:12:33.485327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.675 ms 00:30:15.680 [2024-11-20 21:12:33.485335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.487581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.680 [2024-11-20 21:12:33.487800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:15.680 [2024-11-20 21:12:33.487828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.228 ms 00:30:15.680 [2024-11-20 21:12:33.487836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.487903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.680 [2024-11-20 21:12:33.487913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:15.680 [2024-11-20 21:12:33.487923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:15.680 [2024-11-20 21:12:33.487932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.488047] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.680 [2024-11-20 21:12:33.488058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:15.680 [2024-11-20 21:12:33.488066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:15.680 [2024-11-20 21:12:33.488077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.488099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.680 [2024-11-20 21:12:33.488108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:15.680 [2024-11-20 21:12:33.488116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:15.680 [2024-11-20 21:12:33.488123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.488162] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:15.680 [2024-11-20 21:12:33.488172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.680 [2024-11-20 21:12:33.488181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:15.680 [2024-11-20 21:12:33.488189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:15.680 [2024-11-20 21:12:33.488196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.488253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:15.680 [2024-11-20 21:12:33.488263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:15.680 [2024-11-20 21:12:33.488271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:30:15.680 [2024-11-20 21:12:33.488279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:15.680 [2024-11-20 21:12:33.489477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1689.940 ms, result 0 00:30:15.681 [2024-11-20 21:12:33.505157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.681 [2024-11-20 21:12:33.521156] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:15.681 [2024-11-20 21:12:33.529294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:15.681 Validate MD5 checksum, iteration 1 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:15.681 21:12:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:15.942 [2024-11-20 21:12:33.811228] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:30:15.942 [2024-11-20 21:12:33.811488] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94558 ] 00:30:15.942 [2024-11-20 21:12:33.957046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.942 [2024-11-20 21:12:33.980730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:17.330  [2024-11-20T21:12:36.450Z] Copying: 551/1024 [MB] (551 MBps) [2024-11-20T21:12:37.019Z] Copying: 1024/1024 [MB] (average 537 MBps) 00:30:18.900 00:30:18.900 21:12:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:18.900 21:12:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:20.816 21:12:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:20.816 21:12:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=83c206189037ad737c4ac2036711e543 00:30:20.816 21:12:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 83c206189037ad737c4ac2036711e543 != \8\3\c\2\0\6\1\8\9\0\3\7\a\d\7\3\7\c\4\a\c\2\0\3\6\7\1\1\e\5\4\3 ]] 00:30:20.816 21:12:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:20.816 21:12:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:20.816 21:12:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:20.816 Validate MD5 checksum, iteration 2 00:30:20.816 21:12:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:20.816 21:12:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:20.816 21:12:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:20.816 21:12:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:20.816 21:12:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:20.816 21:12:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:20.816 [2024-11-20 21:12:38.875911] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:30:20.816 [2024-11-20 21:12:38.876149] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94614 ] 00:30:21.075 [2024-11-20 21:12:39.014398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.075 [2024-11-20 21:12:39.036768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.450  [2024-11-20T21:12:41.136Z] Copying: 610/1024 [MB] (610 MBps) [2024-11-20T21:12:41.705Z] Copying: 1024/1024 [MB] (average 622 MBps) 00:30:23.586 00:30:23.586 21:12:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:23.586 21:12:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=40871226f9cb05b62b305a151a24021d 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 40871226f9cb05b62b305a151a24021d != \4\0\8\7\1\2\2\6\f\9\c\b\0\5\b\6\2\b\3\0\5\a\1\5\1\a\2\4\0\2\1\d ]] 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94529 ]] 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94529 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94529 ']' 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94529 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94529 00:30:25.504 killing process with pid 94529 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94529' 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 94529 00:30:25.504 21:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94529 00:30:25.504 [2024-11-20 21:12:43.374680] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:25.504 [2024-11-20 21:12:43.378074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.504 [2024-11-20 21:12:43.378106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:25.504 [2024-11-20 21:12:43.378116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:25.504 [2024-11-20 21:12:43.378123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.504 [2024-11-20 21:12:43.378140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:25.504 [2024-11-20 21:12:43.378508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.504 [2024-11-20 21:12:43.378526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:25.504 [2024-11-20 21:12:43.378533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.357 ms 00:30:25.504 [2024-11-20 21:12:43.378542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.504 [2024-11-20 21:12:43.378735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.504 [2024-11-20 21:12:43.378757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:25.504 [2024-11-20 21:12:43.378764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.178 ms 00:30:25.504 [2024-11-20 21:12:43.378769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.504 [2024-11-20 21:12:43.379880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.504 [2024-11-20 21:12:43.379902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:25.504 [2024-11-20 21:12:43.379909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.098 ms 00:30:25.504 [2024-11-20 21:12:43.379915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.504 [2024-11-20 21:12:43.380818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.504 [2024-11-20 21:12:43.380836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:25.504 [2024-11-20 21:12:43.380843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.877 ms 00:30:25.504 [2024-11-20 21:12:43.380849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.504 [2024-11-20 21:12:43.382244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.504 [2024-11-20 21:12:43.382273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:25.504 [2024-11-20 21:12:43.382280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.368 ms 00:30:25.504 [2024-11-20 21:12:43.382290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.504 [2024-11-20 21:12:43.383414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.504 [2024-11-20 21:12:43.383534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:30:25.504 [2024-11-20 21:12:43.383547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.098 ms 00:30:25.504 [2024-11-20 21:12:43.383553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.504 [2024-11-20 21:12:43.383614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.504 [2024-11-20 21:12:43.383628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:25.504 [2024-11-20 21:12:43.383635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:25.504 [2024-11-20 21:12:43.383640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.504 [2024-11-20 21:12:43.384988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.504 [2024-11-20 21:12:43.385016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:25.504 [2024-11-20 21:12:43.385022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.331 ms 00:30:25.504 [2024-11-20 21:12:43.385034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.504 [2024-11-20 21:12:43.386219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.504 [2024-11-20 21:12:43.386243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:25.504 [2024-11-20 21:12:43.386250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.159 ms 00:30:25.505 [2024-11-20 21:12:43.386255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.387230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.505 [2024-11-20 21:12:43.387256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:25.505 [2024-11-20 21:12:43.387263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.951 ms 00:30:25.505 [2024-11-20 21:12:43.387268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.388350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.505 [2024-11-20 21:12:43.388375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:25.505 [2024-11-20 21:12:43.388382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.036 ms 00:30:25.505 [2024-11-20 21:12:43.388387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.388410] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:25.505 [2024-11-20 21:12:43.388421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:25.505 [2024-11-20 21:12:43.388429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:25.505 [2024-11-20 21:12:43.388436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:25.505 [2024-11-20 21:12:43.388442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388459] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:25.505 [2024-11-20 21:12:43.388528] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:25.505 [2024-11-20 21:12:43.388534] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5c537862-7c69-4a14-9c45-e8599522a66b 00:30:25.505 [2024-11-20 21:12:43.388539] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:25.505 [2024-11-20 21:12:43.388545] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:25.505 [2024-11-20 21:12:43.388549] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:25.505 [2024-11-20 21:12:43.388555] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:25.505 [2024-11-20 21:12:43.388560] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:25.505 [2024-11-20 21:12:43.388566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:25.505 [2024-11-20 21:12:43.388571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:25.505 [2024-11-20 21:12:43.388575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:25.505 [2024-11-20 21:12:43.388580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:25.505 [2024-11-20 21:12:43.388588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.505 [2024-11-20 21:12:43.388597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:25.505 [2024-11-20 21:12:43.388604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:30:25.505 [2024-11-20 21:12:43.388610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.389842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.505 [2024-11-20 21:12:43.389940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:30:25.505 [2024-11-20 21:12:43.389952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.219 ms 00:30:25.505 [2024-11-20 21:12:43.389958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.390029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.505 [2024-11-20 21:12:43.390036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:25.505 [2024-11-20 21:12:43.390042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:30:25.505 [2024-11-20 21:12:43.390048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.394627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.505 [2024-11-20 21:12:43.394654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:25.505 [2024-11-20 21:12:43.394661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.505 [2024-11-20 21:12:43.394668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.394694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.505 [2024-11-20 21:12:43.394701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:25.505 [2024-11-20 21:12:43.394707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.505 [2024-11-20 21:12:43.394713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.394778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.505 [2024-11-20 21:12:43.394786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:25.505 [2024-11-20 21:12:43.394793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.505 [2024-11-20 21:12:43.394801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.394815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.505 [2024-11-20 21:12:43.394823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:25.505 [2024-11-20 21:12:43.394829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.505 [2024-11-20 21:12:43.394834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.402816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.505 [2024-11-20 21:12:43.402844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:25.505 [2024-11-20 21:12:43.402852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.505 [2024-11-20 21:12:43.402860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.408813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.505 [2024-11-20 21:12:43.408944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:25.505 [2024-11-20 21:12:43.408955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.505 [2024-11-20 21:12:43.408961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.409018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.505 [2024-11-20 21:12:43.409025] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:25.505 [2024-11-20 21:12:43.409032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.505 [2024-11-20 21:12:43.409037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.409084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.505 [2024-11-20 21:12:43.409093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:25.505 [2024-11-20 21:12:43.409102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.505 [2024-11-20 21:12:43.409107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.409163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.505 [2024-11-20 21:12:43.409173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:25.505 [2024-11-20 21:12:43.409180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.505 [2024-11-20 21:12:43.409185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.409209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.505 [2024-11-20 21:12:43.409216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:25.505 [2024-11-20 21:12:43.409222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.505 [2024-11-20 21:12:43.409230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.409259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.505 [2024-11-20 21:12:43.409266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:25.505 [2024-11-20 21:12:43.409272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.505 [2024-11-20 21:12:43.409277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.409309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:25.505 [2024-11-20 21:12:43.409316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:25.505 [2024-11-20 21:12:43.409325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:25.505 [2024-11-20 21:12:43.409330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.505 [2024-11-20 21:12:43.409421] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 31.327 ms, result 0 00:30:25.505 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:25.505 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:25.505 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:25.505 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:25.505 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:25.505 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:25.505 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:25.505 Remove shared memory files 00:30:25.505 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:30:25.506 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:25.506 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:25.506 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid94319 00:30:25.506 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:25.506 21:12:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:25.506 ************************************ 00:30:25.506 END TEST ftl_upgrade_shutdown 00:30:25.506 ************************************ 00:30:25.506 00:30:25.506 real 1m9.518s 00:30:25.506 user 1m33.962s 00:30:25.506 sys 0m19.289s 00:30:25.506 21:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.506 21:12:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:25.506 21:12:43 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:30:25.506 21:12:43 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:25.506 21:12:43 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:25.506 21:12:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:25.506 21:12:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:25.766 ************************************ 00:30:25.766 START TEST ftl_restore_fast 00:30:25.766 ************************************ 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:25.766 * Looking for test storage... 00:30:25.766 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.766 --rc genhtml_branch_coverage=1 00:30:25.766 --rc genhtml_function_coverage=1 00:30:25.766 --rc genhtml_legend=1 00:30:25.766 --rc geninfo_all_blocks=1 00:30:25.766 --rc geninfo_unexecuted_blocks=1 00:30:25.766 00:30:25.766 ' 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.766 --rc genhtml_branch_coverage=1 00:30:25.766 --rc genhtml_function_coverage=1 00:30:25.766 --rc genhtml_legend=1 00:30:25.766 --rc geninfo_all_blocks=1 00:30:25.766 --rc geninfo_unexecuted_blocks=1 00:30:25.766 00:30:25.766 ' 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.766 --rc genhtml_branch_coverage=1 00:30:25.766 --rc genhtml_function_coverage=1 00:30:25.766 --rc genhtml_legend=1 00:30:25.766 --rc geninfo_all_blocks=1 00:30:25.766 --rc geninfo_unexecuted_blocks=1 00:30:25.766 00:30:25.766 ' 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:25.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:25.766 --rc genhtml_branch_coverage=1 00:30:25.766 --rc genhtml_function_coverage=1 00:30:25.766 --rc genhtml_legend=1 00:30:25.766 --rc geninfo_all_blocks=1 00:30:25.766 --rc geninfo_unexecuted_blocks=1 00:30:25.766 00:30:25.766 ' 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
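The restore test below drives two PCIe controllers: 0000:00:11.0 becomes the base device and 0000:00:10.0 the non-volatile cache (the -c option parsed just below), with -f selecting fast shutdown. Condensed from the rpc.py invocations traced in what follows, a sketch of the bdev setup sequence (a summary of the traced commands, not a verbatim excerpt; $lvs_uuid stands for the UUID printed by bdev_lvol_create_lvstore, and any stale lvstore is first removed with bdev_lvol_delete_lvstore):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base NVMe, exposes nvme0n1
  lvs_uuid=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)               # lvstore over the base bdev
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u $lvs_uuid              # thin-provisioned 103424 MiB lvol
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache NVMe, exposes nvc0n1
  $rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB split for the NV cache

The sizes checked along the way come from bdev_get_bdevs piped through jq ('.[] .block_size' and '.[] .num_blocks'), as the get_bdev_size traces show; 26,476,544 blocks at 4,096 bytes is exactly the 103,424 MiB requested for the lvol.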
00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.pR5jJ3eExY 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:25.766 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:25.767 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:25.767 21:12:43 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:25.767 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:25.767 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:25.767 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:25.767 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:25.767 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94746 00:30:25.767 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94746 00:30:25.767 21:12:43 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:25.767 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 94746 ']' 00:30:25.767 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.767 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:25.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.767 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.767 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:25.767 21:12:43 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:25.767 [2024-11-20 21:12:43.868093] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:30:25.767 [2024-11-20 21:12:43.868328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94746 ] 00:30:26.027 [2024-11-20 21:12:44.010059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.027 [2024-11-20 21:12:44.026922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.600 21:12:44 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.600 21:12:44 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:30:26.600 21:12:44 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:26.600 21:12:44 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:30:26.600 21:12:44 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:26.600 21:12:44 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:30:26.600 21:12:44 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:30:26.600 21:12:44 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:26.861 21:12:44 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:26.861 21:12:44 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:30:26.861 21:12:44 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:26.861 21:12:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:26.861 21:12:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:26.861 21:12:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:26.861 21:12:44 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:30:26.861 21:12:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:27.122 21:12:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:27.123 { 00:30:27.123 "name": "nvme0n1", 00:30:27.123 "aliases": [ 00:30:27.123 "72e4fb27-82ec-4436-aae6-437eed8fb19d" 00:30:27.123 ], 00:30:27.123 "product_name": "NVMe disk", 00:30:27.123 "block_size": 4096, 00:30:27.123 "num_blocks": 1310720, 00:30:27.123 "uuid": "72e4fb27-82ec-4436-aae6-437eed8fb19d", 00:30:27.123 "numa_id": -1, 00:30:27.123 "assigned_rate_limits": { 00:30:27.123 "rw_ios_per_sec": 0, 00:30:27.123 "rw_mbytes_per_sec": 0, 00:30:27.123 "r_mbytes_per_sec": 0, 00:30:27.123 "w_mbytes_per_sec": 0 00:30:27.123 }, 00:30:27.123 "claimed": true, 00:30:27.123 "claim_type": "read_many_write_one", 00:30:27.123 "zoned": false, 00:30:27.123 "supported_io_types": { 00:30:27.123 "read": true, 00:30:27.123 "write": true, 00:30:27.123 "unmap": true, 00:30:27.123 "flush": true, 00:30:27.123 "reset": true, 00:30:27.123 "nvme_admin": true, 00:30:27.123 "nvme_io": true, 00:30:27.123 "nvme_io_md": false, 00:30:27.123 "write_zeroes": true, 00:30:27.123 "zcopy": false, 00:30:27.123 "get_zone_info": false, 00:30:27.123 "zone_management": false, 00:30:27.123 "zone_append": false, 00:30:27.123 "compare": true, 00:30:27.123 "compare_and_write": false, 00:30:27.123 "abort": true, 00:30:27.123 "seek_hole": false, 00:30:27.123 "seek_data": false, 00:30:27.123 "copy": true, 00:30:27.123 "nvme_iov_md": false 00:30:27.123 }, 00:30:27.123 "driver_specific": { 00:30:27.123 "nvme": [ 00:30:27.123 { 00:30:27.123 "pci_address": "0000:00:11.0", 00:30:27.123 "trid": { 00:30:27.123 "trtype": "PCIe", 00:30:27.123 "traddr": "0000:00:11.0" 00:30:27.123 }, 00:30:27.123 "ctrlr_data": { 00:30:27.123 "cntlid": 0, 00:30:27.123 "vendor_id": "0x1b36", 00:30:27.123 "model_number": "QEMU NVMe Ctrl", 00:30:27.123 "serial_number": "12341", 00:30:27.123 "firmware_revision": "8.0.0", 00:30:27.123 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:27.123 "oacs": { 00:30:27.123 "security": 0, 00:30:27.123 "format": 1, 00:30:27.123 "firmware": 0, 00:30:27.123 "ns_manage": 1 00:30:27.123 }, 00:30:27.123 "multi_ctrlr": false, 00:30:27.123 "ana_reporting": false 00:30:27.123 }, 00:30:27.123 "vs": { 00:30:27.123 "nvme_version": "1.4" 00:30:27.123 }, 00:30:27.123 "ns_data": { 00:30:27.123 "id": 1, 00:30:27.123 "can_share": false 00:30:27.123 } 00:30:27.123 } 00:30:27.123 ], 00:30:27.123 "mp_policy": "active_passive" 00:30:27.123 } 00:30:27.123 } 00:30:27.123 ]' 00:30:27.123 21:12:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:27.123 21:12:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:27.123 21:12:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:27.123 21:12:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:27.123 21:12:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:27.123 21:12:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:30:27.123 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:30:27.123 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:27.123 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:30:27.123 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:27.123 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:27.384 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=bb7730c5-f74c-43df-abfd-b115eb382078 00:30:27.384 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:30:27.384 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bb7730c5-f74c-43df-abfd-b115eb382078 00:30:27.645 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:27.645 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=0f57cb06-7b33-49a3-b306-0b43b406d1df 00:30:27.645 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0f57cb06-7b33-49a3-b306-0b43b406d1df 00:30:27.905 21:12:45 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=79356136-314c-44c6-a5b6-8a4ebec38666 00:30:27.905 21:12:45 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:27.905 21:12:45 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 79356136-314c-44c6-a5b6-8a4ebec38666 00:30:27.905 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:30:27.905 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:27.905 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=79356136-314c-44c6-a5b6-8a4ebec38666 00:30:27.905 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:30:27.905 21:12:45 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 79356136-314c-44c6-a5b6-8a4ebec38666 00:30:27.905 21:12:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=79356136-314c-44c6-a5b6-8a4ebec38666 00:30:27.905 21:12:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:27.906 21:12:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:27.906 21:12:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:27.906 21:12:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 79356136-314c-44c6-a5b6-8a4ebec38666 00:30:28.167 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:28.167 { 00:30:28.167 "name": "79356136-314c-44c6-a5b6-8a4ebec38666", 00:30:28.167 "aliases": [ 00:30:28.167 "lvs/nvme0n1p0" 00:30:28.167 ], 00:30:28.167 "product_name": "Logical Volume", 00:30:28.167 "block_size": 4096, 00:30:28.167 "num_blocks": 26476544, 00:30:28.167 "uuid": "79356136-314c-44c6-a5b6-8a4ebec38666", 00:30:28.167 "assigned_rate_limits": { 00:30:28.167 "rw_ios_per_sec": 0, 00:30:28.167 "rw_mbytes_per_sec": 0, 00:30:28.167 "r_mbytes_per_sec": 0, 00:30:28.167 "w_mbytes_per_sec": 0 00:30:28.167 }, 00:30:28.167 "claimed": false, 00:30:28.167 "zoned": false, 00:30:28.167 "supported_io_types": { 00:30:28.167 "read": true, 00:30:28.167 "write": true, 00:30:28.167 "unmap": true, 00:30:28.167 "flush": false, 00:30:28.167 "reset": true, 00:30:28.167 "nvme_admin": false, 00:30:28.167 "nvme_io": false, 00:30:28.167 "nvme_io_md": false, 00:30:28.167 "write_zeroes": true, 00:30:28.167 "zcopy": false, 00:30:28.167 "get_zone_info": false, 00:30:28.167 "zone_management": false, 00:30:28.167 
"zone_append": false, 00:30:28.167 "compare": false, 00:30:28.167 "compare_and_write": false, 00:30:28.167 "abort": false, 00:30:28.167 "seek_hole": true, 00:30:28.167 "seek_data": true, 00:30:28.167 "copy": false, 00:30:28.167 "nvme_iov_md": false 00:30:28.167 }, 00:30:28.167 "driver_specific": { 00:30:28.167 "lvol": { 00:30:28.167 "lvol_store_uuid": "0f57cb06-7b33-49a3-b306-0b43b406d1df", 00:30:28.167 "base_bdev": "nvme0n1", 00:30:28.167 "thin_provision": true, 00:30:28.167 "num_allocated_clusters": 0, 00:30:28.167 "snapshot": false, 00:30:28.167 "clone": false, 00:30:28.167 "esnap_clone": false 00:30:28.167 } 00:30:28.167 } 00:30:28.167 } 00:30:28.167 ]' 00:30:28.167 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:28.167 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:28.167 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:28.167 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:28.167 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:28.167 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:28.167 21:12:46 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:30:28.167 21:12:46 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:30:28.167 21:12:46 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:28.428 21:12:46 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:28.428 21:12:46 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:28.428 21:12:46 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 79356136-314c-44c6-a5b6-8a4ebec38666 00:30:28.428 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=79356136-314c-44c6-a5b6-8a4ebec38666 00:30:28.428 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:28.429 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:28.429 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:28.429 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 79356136-314c-44c6-a5b6-8a4ebec38666 00:30:28.690 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:28.690 { 00:30:28.690 "name": "79356136-314c-44c6-a5b6-8a4ebec38666", 00:30:28.690 "aliases": [ 00:30:28.690 "lvs/nvme0n1p0" 00:30:28.690 ], 00:30:28.690 "product_name": "Logical Volume", 00:30:28.690 "block_size": 4096, 00:30:28.690 "num_blocks": 26476544, 00:30:28.690 "uuid": "79356136-314c-44c6-a5b6-8a4ebec38666", 00:30:28.690 "assigned_rate_limits": { 00:30:28.690 "rw_ios_per_sec": 0, 00:30:28.690 "rw_mbytes_per_sec": 0, 00:30:28.690 "r_mbytes_per_sec": 0, 00:30:28.690 "w_mbytes_per_sec": 0 00:30:28.690 }, 00:30:28.690 "claimed": false, 00:30:28.690 "zoned": false, 00:30:28.690 "supported_io_types": { 00:30:28.690 "read": true, 00:30:28.690 "write": true, 00:30:28.690 "unmap": true, 00:30:28.690 "flush": false, 00:30:28.690 "reset": true, 00:30:28.690 "nvme_admin": false, 00:30:28.690 "nvme_io": false, 00:30:28.690 "nvme_io_md": false, 00:30:28.690 "write_zeroes": true, 00:30:28.690 "zcopy": false, 00:30:28.690 "get_zone_info": false, 00:30:28.690 
"zone_management": false, 00:30:28.690 "zone_append": false, 00:30:28.690 "compare": false, 00:30:28.690 "compare_and_write": false, 00:30:28.690 "abort": false, 00:30:28.690 "seek_hole": true, 00:30:28.690 "seek_data": true, 00:30:28.690 "copy": false, 00:30:28.690 "nvme_iov_md": false 00:30:28.690 }, 00:30:28.691 "driver_specific": { 00:30:28.691 "lvol": { 00:30:28.691 "lvol_store_uuid": "0f57cb06-7b33-49a3-b306-0b43b406d1df", 00:30:28.691 "base_bdev": "nvme0n1", 00:30:28.691 "thin_provision": true, 00:30:28.691 "num_allocated_clusters": 0, 00:30:28.691 "snapshot": false, 00:30:28.691 "clone": false, 00:30:28.691 "esnap_clone": false 00:30:28.691 } 00:30:28.691 } 00:30:28.691 } 00:30:28.691 ]' 00:30:28.691 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:28.691 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:28.691 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:28.691 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:28.691 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:28.691 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:28.691 21:12:46 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:30:28.691 21:12:46 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:28.952 21:12:46 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:28.952 21:12:46 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 79356136-314c-44c6-a5b6-8a4ebec38666 00:30:28.952 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=79356136-314c-44c6-a5b6-8a4ebec38666 00:30:28.952 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:28.952 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:28.952 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:28.952 21:12:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 79356136-314c-44c6-a5b6-8a4ebec38666 00:30:29.213 21:12:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:29.213 { 00:30:29.213 "name": "79356136-314c-44c6-a5b6-8a4ebec38666", 00:30:29.213 "aliases": [ 00:30:29.213 "lvs/nvme0n1p0" 00:30:29.213 ], 00:30:29.213 "product_name": "Logical Volume", 00:30:29.213 "block_size": 4096, 00:30:29.213 "num_blocks": 26476544, 00:30:29.213 "uuid": "79356136-314c-44c6-a5b6-8a4ebec38666", 00:30:29.213 "assigned_rate_limits": { 00:30:29.213 "rw_ios_per_sec": 0, 00:30:29.213 "rw_mbytes_per_sec": 0, 00:30:29.213 "r_mbytes_per_sec": 0, 00:30:29.213 "w_mbytes_per_sec": 0 00:30:29.213 }, 00:30:29.213 "claimed": false, 00:30:29.213 "zoned": false, 00:30:29.213 "supported_io_types": { 00:30:29.213 "read": true, 00:30:29.213 "write": true, 00:30:29.213 "unmap": true, 00:30:29.213 "flush": false, 00:30:29.213 "reset": true, 00:30:29.213 "nvme_admin": false, 00:30:29.213 "nvme_io": false, 00:30:29.213 "nvme_io_md": false, 00:30:29.213 "write_zeroes": true, 00:30:29.213 "zcopy": false, 00:30:29.213 "get_zone_info": false, 00:30:29.213 "zone_management": false, 00:30:29.213 "zone_append": false, 00:30:29.213 "compare": false, 00:30:29.213 "compare_and_write": false, 00:30:29.213 "abort": false, 
00:30:29.213 "seek_hole": true, 00:30:29.213 "seek_data": true, 00:30:29.213 "copy": false, 00:30:29.213 "nvme_iov_md": false 00:30:29.213 }, 00:30:29.213 "driver_specific": { 00:30:29.213 "lvol": { 00:30:29.214 "lvol_store_uuid": "0f57cb06-7b33-49a3-b306-0b43b406d1df", 00:30:29.214 "base_bdev": "nvme0n1", 00:30:29.214 "thin_provision": true, 00:30:29.214 "num_allocated_clusters": 0, 00:30:29.214 "snapshot": false, 00:30:29.214 "clone": false, 00:30:29.214 "esnap_clone": false 00:30:29.214 } 00:30:29.214 } 00:30:29.214 } 00:30:29.214 ]' 00:30:29.214 21:12:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:29.214 21:12:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:29.214 21:12:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:29.214 21:12:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:29.214 21:12:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:29.214 21:12:47 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:29.214 21:12:47 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:29.214 21:12:47 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 79356136-314c-44c6-a5b6-8a4ebec38666 --l2p_dram_limit 10' 00:30:29.214 21:12:47 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:29.214 21:12:47 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:29.214 21:12:47 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:29.214 21:12:47 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:30:29.214 21:12:47 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:30:29.214 21:12:47 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 79356136-314c-44c6-a5b6-8a4ebec38666 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:30:29.476 [2024-11-20 21:12:47.396292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.476 [2024-11-20 21:12:47.396334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:29.476 [2024-11-20 21:12:47.396345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:29.476 [2024-11-20 21:12:47.396353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.476 [2024-11-20 21:12:47.396398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.476 [2024-11-20 21:12:47.396407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:29.476 [2024-11-20 21:12:47.396416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:30:29.476 [2024-11-20 21:12:47.396424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.476 [2024-11-20 21:12:47.396439] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:29.476 [2024-11-20 21:12:47.396650] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:29.476 [2024-11-20 21:12:47.396665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.476 [2024-11-20 21:12:47.396673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:29.476 [2024-11-20 21:12:47.396680] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:30:29.476 [2024-11-20 21:12:47.396690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.476 [2024-11-20 21:12:47.396739] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5f11f99b-a93a-40e6-9e26-852bcbc4a86c 00:30:29.476 [2024-11-20 21:12:47.397716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.476 [2024-11-20 21:12:47.397737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:29.476 [2024-11-20 21:12:47.397756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:30:29.476 [2024-11-20 21:12:47.397763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.476 [2024-11-20 21:12:47.402462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.476 [2024-11-20 21:12:47.402487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:29.476 [2024-11-20 21:12:47.402496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.662 ms 00:30:29.476 [2024-11-20 21:12:47.402506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.476 [2024-11-20 21:12:47.402566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.476 [2024-11-20 21:12:47.402574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:29.476 [2024-11-20 21:12:47.402582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:29.476 [2024-11-20 21:12:47.402588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.476 [2024-11-20 21:12:47.402625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.476 [2024-11-20 21:12:47.402635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:29.476 [2024-11-20 21:12:47.402642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:29.476 [2024-11-20 21:12:47.402648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.476 [2024-11-20 21:12:47.402666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:29.476 [2024-11-20 21:12:47.403937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.476 [2024-11-20 21:12:47.404054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:29.476 [2024-11-20 21:12:47.404066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.277 ms 00:30:29.476 [2024-11-20 21:12:47.404074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.476 [2024-11-20 21:12:47.404104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.476 [2024-11-20 21:12:47.404112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:29.476 [2024-11-20 21:12:47.404118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:29.476 [2024-11-20 21:12:47.404127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.476 [2024-11-20 21:12:47.404146] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:29.476 [2024-11-20 21:12:47.404253] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:29.476 [2024-11-20 21:12:47.404262] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:29.476 [2024-11-20 21:12:47.404272] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:29.476 [2024-11-20 21:12:47.404280] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:29.476 [2024-11-20 21:12:47.404288] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:29.476 [2024-11-20 21:12:47.404296] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:29.476 [2024-11-20 21:12:47.404304] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:29.476 [2024-11-20 21:12:47.404313] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:29.476 [2024-11-20 21:12:47.404322] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:29.476 [2024-11-20 21:12:47.404328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.476 [2024-11-20 21:12:47.404335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:29.476 [2024-11-20 21:12:47.404340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:30:29.476 [2024-11-20 21:12:47.404348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.476 [2024-11-20 21:12:47.404411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.476 [2024-11-20 21:12:47.404421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:29.476 [2024-11-20 21:12:47.404427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:29.476 [2024-11-20 21:12:47.404433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.476 [2024-11-20 21:12:47.404506] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:29.476 [2024-11-20 21:12:47.404515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:29.476 [2024-11-20 21:12:47.404524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:29.476 [2024-11-20 21:12:47.404531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.476 [2024-11-20 21:12:47.404537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:29.476 [2024-11-20 21:12:47.404543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:29.476 [2024-11-20 21:12:47.404548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:29.476 [2024-11-20 21:12:47.404554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:29.476 [2024-11-20 21:12:47.404560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:29.476 [2024-11-20 21:12:47.404567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:29.476 [2024-11-20 21:12:47.404572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:29.476 [2024-11-20 21:12:47.404578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:29.476 [2024-11-20 21:12:47.404584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:29.476 [2024-11-20 21:12:47.404592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:29.476 [2024-11-20 21:12:47.404598] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:29.476 [2024-11-20 21:12:47.404606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.476 [2024-11-20 21:12:47.404611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:29.477 [2024-11-20 21:12:47.404617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:29.477 [2024-11-20 21:12:47.404622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.477 [2024-11-20 21:12:47.404628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:29.477 [2024-11-20 21:12:47.404635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:29.477 [2024-11-20 21:12:47.404642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.477 [2024-11-20 21:12:47.404648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:29.477 [2024-11-20 21:12:47.404655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:29.477 [2024-11-20 21:12:47.404661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.477 [2024-11-20 21:12:47.404668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:29.477 [2024-11-20 21:12:47.404673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:29.477 [2024-11-20 21:12:47.404680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.477 [2024-11-20 21:12:47.404686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:29.477 [2024-11-20 21:12:47.404694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:29.477 [2024-11-20 21:12:47.404700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:29.477 [2024-11-20 21:12:47.404708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:29.477 [2024-11-20 21:12:47.404713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:29.477 [2024-11-20 21:12:47.404720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:29.477 [2024-11-20 21:12:47.404726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:29.477 [2024-11-20 21:12:47.404734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:29.477 [2024-11-20 21:12:47.404739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:29.477 [2024-11-20 21:12:47.404756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:29.477 [2024-11-20 21:12:47.404762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:29.477 [2024-11-20 21:12:47.404769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.477 [2024-11-20 21:12:47.404774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:29.477 [2024-11-20 21:12:47.404781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:29.477 [2024-11-20 21:12:47.404787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.477 [2024-11-20 21:12:47.404794] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:29.477 [2024-11-20 21:12:47.404800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:29.477 [2024-11-20 21:12:47.404809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:30:29.477 [2024-11-20 21:12:47.404819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:29.477 [2024-11-20 21:12:47.404828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:29.477 [2024-11-20 21:12:47.404835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:29.477 [2024-11-20 21:12:47.404842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:29.477 [2024-11-20 21:12:47.404848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:29.477 [2024-11-20 21:12:47.404855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:29.477 [2024-11-20 21:12:47.404861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:29.477 [2024-11-20 21:12:47.404871] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:29.477 [2024-11-20 21:12:47.404881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:29.477 [2024-11-20 21:12:47.404889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:29.477 [2024-11-20 21:12:47.404895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:29.477 [2024-11-20 21:12:47.404904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:29.477 [2024-11-20 21:12:47.404910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:29.477 [2024-11-20 21:12:47.404917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:29.477 [2024-11-20 21:12:47.404923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:29.477 [2024-11-20 21:12:47.404933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:29.477 [2024-11-20 21:12:47.404939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:29.477 [2024-11-20 21:12:47.404946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:29.477 [2024-11-20 21:12:47.404952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:29.477 [2024-11-20 21:12:47.404959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:29.477 [2024-11-20 21:12:47.404966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:29.477 [2024-11-20 21:12:47.404973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:29.477 [2024-11-20 21:12:47.404980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
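A note on the superblock region table above: the blk_offs/blk_sz values appear to be counts of 4 KiB FTL blocks (the base bdev's block_size is 4096), which reproduces the MiB figures printed in the NV cache layout dump. A quick shell check, assuming that 4 KiB block size (a sketch, not part of the trace):

    # type 0x2 is the L2P region: blk_sz 0x5000 blocks of 4 KiB each
    blk_sz=0x5000
    echo "$(( blk_sz * 4096 / 1024 / 1024 )) MiB"   # -> "80 MiB", matching "Region l2p ... blocks: 80.00 MiB"
    # likewise type 0x3 (band_md): 0x80 blocks * 4 KiB = 0.5 MiB, matching "Region band_md ... blocks: 0.50 MiB"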
00:30:29.477 [2024-11-20 21:12:47.404987] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:29.477 [2024-11-20 21:12:47.404994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:29.477 [2024-11-20 21:12:47.405002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:29.477 [2024-11-20 21:12:47.405008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:29.477 [2024-11-20 21:12:47.405016] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:29.477 [2024-11-20 21:12:47.405022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:29.477 [2024-11-20 21:12:47.405029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:29.477 [2024-11-20 21:12:47.405038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:29.477 [2024-11-20 21:12:47.405049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:30:29.477 [2024-11-20 21:12:47.405058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:29.477 [2024-11-20 21:12:47.405089] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:30:29.477 [2024-11-20 21:12:47.405095] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:33.686 [2024-11-20 21:12:51.014480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.686 [2024-11-20 21:12:51.014554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:33.686 [2024-11-20 21:12:51.014572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3609.370 ms 00:30:33.686 [2024-11-20 21:12:51.014581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.686 [2024-11-20 21:12:51.025838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.686 [2024-11-20 21:12:51.025885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:33.686 [2024-11-20 21:12:51.025900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.154 ms 00:30:33.686 [2024-11-20 21:12:51.025909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.686 [2024-11-20 21:12:51.026036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.686 [2024-11-20 21:12:51.026050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:33.686 [2024-11-20 21:12:51.026067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:30:33.686 [2024-11-20 21:12:51.026074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.686 [2024-11-20 21:12:51.036659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.686 [2024-11-20 21:12:51.036703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:33.686 [2024-11-20 21:12:51.036721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.531 ms 00:30:33.686 [2024-11-20 21:12:51.036729] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.686 [2024-11-20 21:12:51.036783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.686 [2024-11-20 21:12:51.036793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:33.686 [2024-11-20 21:12:51.036803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:33.686 [2024-11-20 21:12:51.036810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.686 [2024-11-20 21:12:51.037241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.686 [2024-11-20 21:12:51.037263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:33.686 [2024-11-20 21:12:51.037275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:30:33.686 [2024-11-20 21:12:51.037287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.686 [2024-11-20 21:12:51.037411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.686 [2024-11-20 21:12:51.037424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:33.686 [2024-11-20 21:12:51.037437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:30:33.686 [2024-11-20 21:12:51.037446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.686 [2024-11-20 21:12:51.044311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.686 [2024-11-20 21:12:51.044457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:33.686 [2024-11-20 21:12:51.044524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.840 ms 00:30:33.686 [2024-11-20 21:12:51.044548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.686 [2024-11-20 21:12:51.053594] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:33.686 [2024-11-20 21:12:51.057129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.686 [2024-11-20 21:12:51.057268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:33.686 [2024-11-20 21:12:51.057325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.498 ms 00:30:33.686 [2024-11-20 21:12:51.057351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.686 [2024-11-20 21:12:51.159500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.686 [2024-11-20 21:12:51.159776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:33.686 [2024-11-20 21:12:51.159872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.103 ms 00:30:33.686 [2024-11-20 21:12:51.159907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.686 [2024-11-20 21:12:51.160360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.686 [2024-11-20 21:12:51.160487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:33.686 [2024-11-20 21:12:51.160627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.173 ms 00:30:33.686 [2024-11-20 21:12:51.160659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.686 [2024-11-20 21:12:51.167500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.686 [2024-11-20 21:12:51.167699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:30:33.686 [2024-11-20 21:12:51.167723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.735 ms 00:30:33.686 [2024-11-20 21:12:51.167734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.686 [2024-11-20 21:12:51.173864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.686 [2024-11-20 21:12:51.174071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:33.686 [2024-11-20 21:12:51.174092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.967 ms 00:30:33.686 [2024-11-20 21:12:51.174101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.686 [2024-11-20 21:12:51.174429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.686 [2024-11-20 21:12:51.174444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:33.686 [2024-11-20 21:12:51.174454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:30:33.687 [2024-11-20 21:12:51.174467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.223801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.223862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:33.687 [2024-11-20 21:12:51.223885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.288 ms 00:30:33.687 [2024-11-20 21:12:51.223896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.231660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.231724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:33.687 [2024-11-20 21:12:51.231736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.682 ms 00:30:33.687 [2024-11-20 21:12:51.231769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.238682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.238743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:33.687 [2024-11-20 21:12:51.238782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.860 ms 00:30:33.687 [2024-11-20 21:12:51.238792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.245820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.245878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:33.687 [2024-11-20 21:12:51.245890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.976 ms 00:30:33.687 [2024-11-20 21:12:51.245903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.245971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.245984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:33.687 [2024-11-20 21:12:51.245994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:33.687 [2024-11-20 21:12:51.246005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.246102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.246114] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:33.687 [2024-11-20 21:12:51.246123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:33.687 [2024-11-20 21:12:51.246137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.247308] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3850.525 ms, result 0 00:30:33.687 { 00:30:33.687 "name": "ftl0", 00:30:33.687 "uuid": "5f11f99b-a93a-40e6-9e26-852bcbc4a86c" 00:30:33.687 } 00:30:33.687 21:12:51 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:33.687 21:12:51 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:33.687 21:12:51 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:30:33.687 21:12:51 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:33.687 [2024-11-20 21:12:51.698713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.698796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:33.687 [2024-11-20 21:12:51.698816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:33.687 [2024-11-20 21:12:51.698825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.698859] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:33.687 [2024-11-20 21:12:51.699635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.699693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:33.687 [2024-11-20 21:12:51.699705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.758 ms 00:30:33.687 [2024-11-20 21:12:51.699720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.700011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.700027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:33.687 [2024-11-20 21:12:51.700041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:30:33.687 [2024-11-20 21:12:51.700055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.703305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.703334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:33.687 [2024-11-20 21:12:51.703345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.234 ms 00:30:33.687 [2024-11-20 21:12:51.703355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.709855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.709908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:33.687 [2024-11-20 21:12:51.709925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.481 ms 00:30:33.687 [2024-11-20 21:12:51.709963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.713110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:33.687 [2024-11-20 21:12:51.713329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:33.687 [2024-11-20 21:12:51.713348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.048 ms 00:30:33.687 [2024-11-20 21:12:51.713359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.721241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.721319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:33.687 [2024-11-20 21:12:51.721333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.509 ms 00:30:33.687 [2024-11-20 21:12:51.721344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.721489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.721503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:33.687 [2024-11-20 21:12:51.721516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:30:33.687 [2024-11-20 21:12:51.721526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.725150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.725213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:33.687 [2024-11-20 21:12:51.725223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.603 ms 00:30:33.687 [2024-11-20 21:12:51.725233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.728385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.728607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:33.687 [2024-11-20 21:12:51.728628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.102 ms 00:30:33.687 [2024-11-20 21:12:51.728638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.731239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.731304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:33.687 [2024-11-20 21:12:51.731321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.455 ms 00:30:33.687 [2024-11-20 21:12:51.731331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.733784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.687 [2024-11-20 21:12:51.733842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:33.687 [2024-11-20 21:12:51.733851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.353 ms 00:30:33.687 [2024-11-20 21:12:51.733862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.687 [2024-11-20 21:12:51.733908] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:33.687 [2024-11-20 21:12:51.733927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:33.687 [2024-11-20 21:12:51.733964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:33.687 [2024-11-20 21:12:51.733976] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:33.687 [2024-11-20 21:12:51.733984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:33.687 [2024-11-20 21:12:51.733998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:33.687 [2024-11-20 21:12:51.734005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:33.687 [2024-11-20 21:12:51.734016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:33.687 [2024-11-20 21:12:51.734023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:33.687 [2024-11-20 21:12:51.734032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:33.687 [2024-11-20 21:12:51.734040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:33.687 [2024-11-20 21:12:51.734049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734201] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 
21:12:51.734433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:30:33.688 [2024-11-20 21:12:51.734657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:33.688 [2024-11-20 21:12:51.734675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:33.689 [2024-11-20 21:12:51.734896] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:33.689 [2024-11-20 21:12:51.734905] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f11f99b-a93a-40e6-9e26-852bcbc4a86c 00:30:33.689 
[2024-11-20 21:12:51.734917] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:33.689 [2024-11-20 21:12:51.734924] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:33.689 [2024-11-20 21:12:51.734934] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:33.689 [2024-11-20 21:12:51.734948] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:33.689 [2024-11-20 21:12:51.734957] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:33.689 [2024-11-20 21:12:51.734969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:33.689 [2024-11-20 21:12:51.734979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:33.689 [2024-11-20 21:12:51.734985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:33.689 [2024-11-20 21:12:51.734994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:33.689 [2024-11-20 21:12:51.735001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.689 [2024-11-20 21:12:51.735011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:33.689 [2024-11-20 21:12:51.735020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:30:33.689 [2024-11-20 21:12:51.735031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.689 [2024-11-20 21:12:51.737456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.689 [2024-11-20 21:12:51.737496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:33.689 [2024-11-20 21:12:51.737507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.401 ms 00:30:33.689 [2024-11-20 21:12:51.737520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.689 [2024-11-20 21:12:51.737646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.689 [2024-11-20 21:12:51.737658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:33.689 [2024-11-20 21:12:51.737674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:30:33.689 [2024-11-20 21:12:51.737683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.689 [2024-11-20 21:12:51.746209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.689 [2024-11-20 21:12:51.746266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:33.689 [2024-11-20 21:12:51.746279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.689 [2024-11-20 21:12:51.746290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.689 [2024-11-20 21:12:51.746359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.689 [2024-11-20 21:12:51.746369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:33.689 [2024-11-20 21:12:51.746378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.689 [2024-11-20 21:12:51.746388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.689 [2024-11-20 21:12:51.746471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.689 [2024-11-20 21:12:51.746487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:33.689 [2024-11-20 21:12:51.746496] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.689 [2024-11-20 21:12:51.746508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.689 [2024-11-20 21:12:51.746525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.689 [2024-11-20 21:12:51.746536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:33.689 [2024-11-20 21:12:51.746543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.689 [2024-11-20 21:12:51.746553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.689 [2024-11-20 21:12:51.760394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.689 [2024-11-20 21:12:51.760455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:33.689 [2024-11-20 21:12:51.760469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.689 [2024-11-20 21:12:51.760480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.689 [2024-11-20 21:12:51.771026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.689 [2024-11-20 21:12:51.771084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:33.689 [2024-11-20 21:12:51.771095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.689 [2024-11-20 21:12:51.771105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.689 [2024-11-20 21:12:51.771200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.689 [2024-11-20 21:12:51.771215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:33.689 [2024-11-20 21:12:51.771224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.689 [2024-11-20 21:12:51.771234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.689 [2024-11-20 21:12:51.771285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.689 [2024-11-20 21:12:51.771297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:33.689 [2024-11-20 21:12:51.771305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.689 [2024-11-20 21:12:51.771315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.689 [2024-11-20 21:12:51.771387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.689 [2024-11-20 21:12:51.771399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:33.689 [2024-11-20 21:12:51.771407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.689 [2024-11-20 21:12:51.771417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.689 [2024-11-20 21:12:51.771448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.689 [2024-11-20 21:12:51.771463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:33.689 [2024-11-20 21:12:51.771471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.689 [2024-11-20 21:12:51.771480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.689 [2024-11-20 21:12:51.771520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.689 [2024-11-20 21:12:51.771533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:30:33.689 [2024-11-20 21:12:51.771541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.689 [2024-11-20 21:12:51.771551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.690 [2024-11-20 21:12:51.771608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:33.690 [2024-11-20 21:12:51.771621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:33.690 [2024-11-20 21:12:51.771629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:33.690 [2024-11-20 21:12:51.771639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.690 [2024-11-20 21:12:51.771809] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.031 ms, result 0 00:30:33.690 true 00:30:33.690 21:12:51 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94746 00:30:33.690 21:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94746 ']' 00:30:33.690 21:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94746 00:30:33.690 21:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:30:33.690 21:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:33.951 21:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94746 00:30:33.951 killing process with pid 94746 00:30:33.951 21:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:33.951 21:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:33.951 21:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94746' 00:30:33.951 21:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 94746 00:30:33.951 21:12:51 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 94746 00:30:38.160 21:12:55 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:42.367 262144+0 records in 00:30:42.367 262144+0 records out 00:30:42.367 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.70142 s, 290 MB/s 00:30:42.367 21:12:59 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:43.753 21:13:01 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:43.753 [2024-11-20 21:13:01.840657] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:30:43.753 [2024-11-20 21:13:01.840800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94950 ] 00:30:44.014 [2024-11-20 21:13:01.982710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.014 [2024-11-20 21:13:02.003493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.014 [2024-11-20 21:13:02.095148] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:44.014 [2024-11-20 21:13:02.095220] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:44.277 [2024-11-20 21:13:02.254196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-20 21:13:02.254442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:44.277 [2024-11-20 21:13:02.254468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:44.277 [2024-11-20 21:13:02.254477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-20 21:13:02.254560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-20 21:13:02.254571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:44.277 [2024-11-20 21:13:02.254581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:44.277 [2024-11-20 21:13:02.254594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-20 21:13:02.254622] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:44.277 [2024-11-20 21:13:02.254972] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:44.277 [2024-11-20 21:13:02.254999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-20 21:13:02.255010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:44.277 [2024-11-20 21:13:02.255025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:30:44.277 [2024-11-20 21:13:02.255038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-20 21:13:02.256726] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:44.277 [2024-11-20 21:13:02.260421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-20 21:13:02.260479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:44.277 [2024-11-20 21:13:02.260491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:30:44.277 [2024-11-20 21:13:02.260509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-20 21:13:02.260596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-20 21:13:02.260609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:44.277 [2024-11-20 21:13:02.260618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:44.277 [2024-11-20 21:13:02.260626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-20 21:13:02.269033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:44.277 [2024-11-20 21:13:02.269083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:44.277 [2024-11-20 21:13:02.269097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.364 ms 00:30:44.277 [2024-11-20 21:13:02.269111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-20 21:13:02.269215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-20 21:13:02.269225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:44.277 [2024-11-20 21:13:02.269234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:30:44.277 [2024-11-20 21:13:02.269244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-20 21:13:02.269307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-20 21:13:02.269317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:44.277 [2024-11-20 21:13:02.269326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:44.277 [2024-11-20 21:13:02.269333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-20 21:13:02.269359] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:44.277 [2024-11-20 21:13:02.271496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-20 21:13:02.271536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:44.277 [2024-11-20 21:13:02.271546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.143 ms 00:30:44.277 [2024-11-20 21:13:02.271562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-20 21:13:02.271597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-20 21:13:02.271605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:44.277 [2024-11-20 21:13:02.271614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:44.277 [2024-11-20 21:13:02.271621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-20 21:13:02.271649] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:44.277 [2024-11-20 21:13:02.271672] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:44.277 [2024-11-20 21:13:02.271716] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:44.277 [2024-11-20 21:13:02.271732] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:44.277 [2024-11-20 21:13:02.271858] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:44.277 [2024-11-20 21:13:02.271870] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:44.277 [2024-11-20 21:13:02.271881] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:44.277 [2024-11-20 21:13:02.271895] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:44.277 [2024-11-20 21:13:02.271904] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:44.277 [2024-11-20 21:13:02.271912] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:44.277 [2024-11-20 21:13:02.271920] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:44.277 [2024-11-20 21:13:02.271928] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:44.277 [2024-11-20 21:13:02.271936] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:44.277 [2024-11-20 21:13:02.271944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-20 21:13:02.271951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:44.277 [2024-11-20 21:13:02.271959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:30:44.277 [2024-11-20 21:13:02.271969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-20 21:13:02.272051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.277 [2024-11-20 21:13:02.272063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:44.277 [2024-11-20 21:13:02.272071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:44.277 [2024-11-20 21:13:02.272078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.277 [2024-11-20 21:13:02.272179] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:44.277 [2024-11-20 21:13:02.272197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:44.277 [2024-11-20 21:13:02.272210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:44.277 [2024-11-20 21:13:02.272220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.277 [2024-11-20 21:13:02.272228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:44.277 [2024-11-20 21:13:02.272236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:44.277 [2024-11-20 21:13:02.272244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:44.277 [2024-11-20 21:13:02.272252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:44.277 [2024-11-20 21:13:02.272261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:44.277 [2024-11-20 21:13:02.272269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:44.277 [2024-11-20 21:13:02.272282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:44.277 [2024-11-20 21:13:02.272290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:44.277 [2024-11-20 21:13:02.272297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:44.277 [2024-11-20 21:13:02.272305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:44.277 [2024-11-20 21:13:02.272313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:44.277 [2024-11-20 21:13:02.272321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.277 [2024-11-20 21:13:02.272330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:44.277 [2024-11-20 21:13:02.272337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:44.277 [2024-11-20 21:13:02.272345] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.277 [2024-11-20 21:13:02.272353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:44.277 [2024-11-20 21:13:02.272361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:44.277 [2024-11-20 21:13:02.272368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:44.277 [2024-11-20 21:13:02.272376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:44.277 [2024-11-20 21:13:02.272384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:44.277 [2024-11-20 21:13:02.272391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:44.277 [2024-11-20 21:13:02.272399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:44.277 [2024-11-20 21:13:02.272411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:44.277 [2024-11-20 21:13:02.272418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:44.277 [2024-11-20 21:13:02.272426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:44.277 [2024-11-20 21:13:02.272433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:44.277 [2024-11-20 21:13:02.272440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:44.277 [2024-11-20 21:13:02.272447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:44.277 [2024-11-20 21:13:02.272455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:44.278 [2024-11-20 21:13:02.272463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:44.278 [2024-11-20 21:13:02.272471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:44.278 [2024-11-20 21:13:02.272478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:44.278 [2024-11-20 21:13:02.272486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:44.278 [2024-11-20 21:13:02.272493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:44.278 [2024-11-20 21:13:02.272500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:44.278 [2024-11-20 21:13:02.272507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.278 [2024-11-20 21:13:02.272515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:44.278 [2024-11-20 21:13:02.272522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:44.278 [2024-11-20 21:13:02.272533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.278 [2024-11-20 21:13:02.272539] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:44.278 [2024-11-20 21:13:02.272547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:44.278 [2024-11-20 21:13:02.272556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:44.278 [2024-11-20 21:13:02.272564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:44.278 [2024-11-20 21:13:02.272572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:44.278 [2024-11-20 21:13:02.272582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:44.278 [2024-11-20 21:13:02.272589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:44.278 
[2024-11-20 21:13:02.272595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:44.278 [2024-11-20 21:13:02.272602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:44.278 [2024-11-20 21:13:02.272608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:44.278 [2024-11-20 21:13:02.272617] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:44.278 [2024-11-20 21:13:02.272629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:44.278 [2024-11-20 21:13:02.272638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:44.278 [2024-11-20 21:13:02.272645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:44.278 [2024-11-20 21:13:02.272653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:44.278 [2024-11-20 21:13:02.272663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:44.278 [2024-11-20 21:13:02.272670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:44.278 [2024-11-20 21:13:02.272677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:44.278 [2024-11-20 21:13:02.272684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:44.278 [2024-11-20 21:13:02.272691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:44.278 [2024-11-20 21:13:02.272698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:44.278 [2024-11-20 21:13:02.272705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:44.278 [2024-11-20 21:13:02.272712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:44.278 [2024-11-20 21:13:02.272725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:44.278 [2024-11-20 21:13:02.272732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:44.278 [2024-11-20 21:13:02.272739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:44.278 [2024-11-20 21:13:02.273118] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:44.278 [2024-11-20 21:13:02.273188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:44.278 [2024-11-20 21:13:02.273220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:44.278 [2024-11-20 21:13:02.273250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:44.278 [2024-11-20 21:13:02.273347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:44.278 [2024-11-20 21:13:02.273386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:44.278 [2024-11-20 21:13:02.273418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.278 [2024-11-20 21:13:02.273438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:44.278 [2024-11-20 21:13:02.273458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:30:44.278 [2024-11-20 21:13:02.273508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.278 [2024-11-20 21:13:02.286317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.278 [2024-11-20 21:13:02.286476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:44.278 [2024-11-20 21:13:02.286532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.703 ms 00:30:44.278 [2024-11-20 21:13:02.286554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.278 [2024-11-20 21:13:02.286655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.278 [2024-11-20 21:13:02.286676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:44.278 [2024-11-20 21:13:02.286695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:30:44.278 [2024-11-20 21:13:02.286713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.278 [2024-11-20 21:13:02.306924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.278 [2024-11-20 21:13:02.307098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:44.278 [2024-11-20 21:13:02.307159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.117 ms 00:30:44.278 [2024-11-20 21:13:02.307194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.278 [2024-11-20 21:13:02.307253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.278 [2024-11-20 21:13:02.307276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:44.278 [2024-11-20 21:13:02.307297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:44.278 [2024-11-20 21:13:02.307322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.278 [2024-11-20 21:13:02.307838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.278 [2024-11-20 21:13:02.307900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:44.278 [2024-11-20 21:13:02.308046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:30:44.278 [2024-11-20 21:13:02.308069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.278 [2024-11-20 21:13:02.308222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.278 [2024-11-20 21:13:02.308254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:44.278 [2024-11-20 21:13:02.308337] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:30:44.278 [2024-11-20 21:13:02.308349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.278 [2024-11-20 21:13:02.315655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.278 [2024-11-20 21:13:02.315708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:44.278 [2024-11-20 21:13:02.315724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.281 ms 00:30:44.278 [2024-11-20 21:13:02.315733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.278 [2024-11-20 21:13:02.319205] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:44.278 [2024-11-20 21:13:02.319257] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:44.278 [2024-11-20 21:13:02.319270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.278 [2024-11-20 21:13:02.319279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:44.278 [2024-11-20 21:13:02.319289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.430 ms 00:30:44.278 [2024-11-20 21:13:02.319296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.278 [2024-11-20 21:13:02.334884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.278 [2024-11-20 21:13:02.334930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:44.278 [2024-11-20 21:13:02.334944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.532 ms 00:30:44.278 [2024-11-20 21:13:02.334952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.278 [2024-11-20 21:13:02.338009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.278 [2024-11-20 21:13:02.338166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:44.278 [2024-11-20 21:13:02.338184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.008 ms 00:30:44.278 [2024-11-20 21:13:02.338191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.278 [2024-11-20 21:13:02.340715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.278 [2024-11-20 21:13:02.340779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:44.278 [2024-11-20 21:13:02.340789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.487 ms 00:30:44.278 [2024-11-20 21:13:02.340797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.278 [2024-11-20 21:13:02.341141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.278 [2024-11-20 21:13:02.341154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:44.278 [2024-11-20 21:13:02.341163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:30:44.278 [2024-11-20 21:13:02.341171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.278 [2024-11-20 21:13:02.363525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.278 [2024-11-20 21:13:02.363590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:44.278 [2024-11-20 21:13:02.363603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.337 ms 00:30:44.278 [2024-11-20 21:13:02.363612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.278 [2024-11-20 21:13:02.371391] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:44.279 [2024-11-20 21:13:02.374265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.279 [2024-11-20 21:13:02.374303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:44.279 [2024-11-20 21:13:02.374322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.605 ms 00:30:44.279 [2024-11-20 21:13:02.374333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.279 [2024-11-20 21:13:02.374406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.279 [2024-11-20 21:13:02.374418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:44.279 [2024-11-20 21:13:02.374427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:44.279 [2024-11-20 21:13:02.374435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.279 [2024-11-20 21:13:02.374503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.279 [2024-11-20 21:13:02.374520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:44.279 [2024-11-20 21:13:02.374529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:44.279 [2024-11-20 21:13:02.374536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.279 [2024-11-20 21:13:02.374562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.279 [2024-11-20 21:13:02.374571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:44.279 [2024-11-20 21:13:02.374581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:44.279 [2024-11-20 21:13:02.374589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.279 [2024-11-20 21:13:02.374624] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:44.279 [2024-11-20 21:13:02.374634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.279 [2024-11-20 21:13:02.374646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:44.279 [2024-11-20 21:13:02.374658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:44.279 [2024-11-20 21:13:02.374665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.279 [2024-11-20 21:13:02.379579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.279 [2024-11-20 21:13:02.379625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:44.279 [2024-11-20 21:13:02.379636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.893 ms 00:30:44.279 [2024-11-20 21:13:02.379644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.279 [2024-11-20 21:13:02.379730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:44.279 [2024-11-20 21:13:02.379740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:44.279 [2024-11-20 21:13:02.379784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:30:44.279 [2024-11-20 21:13:02.379792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:44.279 
[2024-11-20 21:13:02.380811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 126.172 ms, result 0 00:30:45.665  [2024-11-20T21:13:04.727Z] Copying: 10000/1048576 [kB] (10000 kBps) [2024-11-20T21:13:05.672Z] Copying: 23/1024 [MB] (14 MBps) [2024-11-20T21:13:06.618Z] Copying: 42/1024 [MB] (18 MBps) [2024-11-20T21:13:07.562Z] Copying: 53/1024 [MB] (11 MBps) [2024-11-20T21:13:08.507Z] Copying: 66/1024 [MB] (13 MBps) [2024-11-20T21:13:09.468Z] Copying: 79/1024 [MB] (13 MBps) [2024-11-20T21:13:10.412Z] Copying: 105/1024 [MB] (25 MBps) [2024-11-20T21:13:11.876Z] Copying: 126/1024 [MB] (20 MBps) [2024-11-20T21:13:12.462Z] Copying: 136/1024 [MB] (10 MBps) [2024-11-20T21:13:13.404Z] Copying: 154/1024 [MB] (18 MBps) [2024-11-20T21:13:14.790Z] Copying: 165/1024 [MB] (10 MBps) [2024-11-20T21:13:15.731Z] Copying: 180/1024 [MB] (14 MBps) [2024-11-20T21:13:16.673Z] Copying: 196/1024 [MB] (15 MBps) [2024-11-20T21:13:17.617Z] Copying: 214/1024 [MB] (18 MBps) [2024-11-20T21:13:18.562Z] Copying: 228/1024 [MB] (14 MBps) [2024-11-20T21:13:19.504Z] Copying: 244/1024 [MB] (15 MBps) [2024-11-20T21:13:20.446Z] Copying: 260/1024 [MB] (16 MBps) [2024-11-20T21:13:21.833Z] Copying: 290/1024 [MB] (29 MBps) [2024-11-20T21:13:22.407Z] Copying: 300/1024 [MB] (10 MBps) [2024-11-20T21:13:23.796Z] Copying: 314/1024 [MB] (14 MBps) [2024-11-20T21:13:24.741Z] Copying: 325/1024 [MB] (10 MBps) [2024-11-20T21:13:25.686Z] Copying: 336/1024 [MB] (11 MBps) [2024-11-20T21:13:26.631Z] Copying: 347/1024 [MB] (11 MBps) [2024-11-20T21:13:27.573Z] Copying: 370/1024 [MB] (23 MBps) [2024-11-20T21:13:28.514Z] Copying: 392/1024 [MB] (21 MBps) [2024-11-20T21:13:29.458Z] Copying: 404/1024 [MB] (12 MBps) [2024-11-20T21:13:30.400Z] Copying: 422/1024 [MB] (17 MBps) [2024-11-20T21:13:31.786Z] Copying: 437/1024 [MB] (15 MBps) [2024-11-20T21:13:32.730Z] Copying: 459/1024 [MB] (21 MBps) [2024-11-20T21:13:33.675Z] Copying: 481/1024 [MB] (22 MBps) [2024-11-20T21:13:34.619Z] Copying: 503/1024 [MB] (21 MBps) [2024-11-20T21:13:35.564Z] Copying: 523/1024 [MB] (19 MBps) [2024-11-20T21:13:36.507Z] Copying: 543/1024 [MB] (20 MBps) [2024-11-20T21:13:37.452Z] Copying: 555/1024 [MB] (11 MBps) [2024-11-20T21:13:38.836Z] Copying: 576/1024 [MB] (21 MBps) [2024-11-20T21:13:39.409Z] Copying: 606/1024 [MB] (29 MBps) [2024-11-20T21:13:40.796Z] Copying: 628/1024 [MB] (21 MBps) [2024-11-20T21:13:41.740Z] Copying: 651/1024 [MB] (23 MBps) [2024-11-20T21:13:42.681Z] Copying: 663/1024 [MB] (12 MBps) [2024-11-20T21:13:43.677Z] Copying: 689/1024 [MB] (25 MBps) [2024-11-20T21:13:44.640Z] Copying: 708/1024 [MB] (18 MBps) [2024-11-20T21:13:45.584Z] Copying: 721/1024 [MB] (13 MBps) [2024-11-20T21:13:46.527Z] Copying: 744/1024 [MB] (22 MBps) [2024-11-20T21:13:47.470Z] Copying: 768/1024 [MB] (23 MBps) [2024-11-20T21:13:48.414Z] Copying: 788/1024 [MB] (20 MBps) [2024-11-20T21:13:49.814Z] Copying: 808/1024 [MB] (20 MBps) [2024-11-20T21:13:50.756Z] Copying: 835/1024 [MB] (26 MBps) [2024-11-20T21:13:51.700Z] Copying: 858/1024 [MB] (23 MBps) [2024-11-20T21:13:52.644Z] Copying: 888/1024 [MB] (29 MBps) [2024-11-20T21:13:53.588Z] Copying: 913/1024 [MB] (25 MBps) [2024-11-20T21:13:54.533Z] Copying: 931/1024 [MB] (17 MBps) [2024-11-20T21:13:55.478Z] Copying: 942/1024 [MB] (11 MBps) [2024-11-20T21:13:56.422Z] Copying: 960/1024 [MB] (17 MBps) [2024-11-20T21:13:57.812Z] Copying: 983/1024 [MB] (22 MBps) [2024-11-20T21:13:58.387Z] Copying: 993/1024 [MB] (10 MBps) [2024-11-20T21:13:58.387Z] Copying: 1024/1024 [MB] (average 18 
MBps)[2024-11-20 21:13:58.225578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.268 [2024-11-20 21:13:58.225621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:40.268 [2024-11-20 21:13:58.225635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:40.268 [2024-11-20 21:13:58.225649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.268 [2024-11-20 21:13:58.225672] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:40.268 [2024-11-20 21:13:58.226135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.268 [2024-11-20 21:13:58.226153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:40.268 [2024-11-20 21:13:58.226163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:31:40.268 [2024-11-20 21:13:58.226170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.268 [2024-11-20 21:13:58.228095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.268 [2024-11-20 21:13:58.228131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:40.268 [2024-11-20 21:13:58.228144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.905 ms 00:31:40.268 [2024-11-20 21:13:58.228152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.268 [2024-11-20 21:13:58.228184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.268 [2024-11-20 21:13:58.228193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:40.268 [2024-11-20 21:13:58.228201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:40.268 [2024-11-20 21:13:58.228208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.268 [2024-11-20 21:13:58.228250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.268 [2024-11-20 21:13:58.228258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:40.268 [2024-11-20 21:13:58.228266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:40.268 [2024-11-20 21:13:58.228273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.268 [2024-11-20 21:13:58.228286] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:40.268 [2024-11-20 21:13:58.228296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 
21:13:58.228350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:40.268 [2024-11-20 21:13:58.228414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 
00:31:40.269 [2024-11-20 21:13:58.228531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 
wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:40.269 [2024-11-20 21:13:58.228969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:40.270 [2024-11-20 21:13:58.228976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:40.270 [2024-11-20 21:13:58.228983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:40.270 [2024-11-20 21:13:58.228990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:40.270 [2024-11-20 21:13:58.228997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:40.270 [2024-11-20 21:13:58.229004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:40.270 [2024-11-20 21:13:58.229012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:40.270 [2024-11-20 21:13:58.229020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:40.270 [2024-11-20 21:13:58.229027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:40.270 [2024-11-20 21:13:58.229034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:40.270 [2024-11-20 21:13:58.229049] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:40.270 [2024-11-20 21:13:58.229058] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f11f99b-a93a-40e6-9e26-852bcbc4a86c 00:31:40.270 [2024-11-20 21:13:58.229066] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:40.270 [2024-11-20 21:13:58.229073] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:40.270 [2024-11-20 21:13:58.229080] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:40.270 [2024-11-20 21:13:58.229090] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:40.270 [2024-11-20 21:13:58.229098] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:40.270 [2024-11-20 21:13:58.229106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:31:40.270 [2024-11-20 21:13:58.229113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:40.270 [2024-11-20 21:13:58.229119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:40.270 [2024-11-20 21:13:58.229125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:40.270 [2024-11-20 21:13:58.229132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.270 [2024-11-20 21:13:58.229139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:40.270 [2024-11-20 21:13:58.229147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:31:40.270 [2024-11-20 21:13:58.229154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.270 [2024-11-20 21:13:58.230516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.270 [2024-11-20 21:13:58.230533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:40.270 [2024-11-20 21:13:58.230546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.344 ms 00:31:40.270 [2024-11-20 21:13:58.230554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.270 [2024-11-20 21:13:58.230635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:40.270 [2024-11-20 21:13:58.230643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:40.270 [2024-11-20 21:13:58.230654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:40.270 [2024-11-20 21:13:58.230661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.270 [2024-11-20 21:13:58.235572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.270 [2024-11-20 21:13:58.235680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:40.270 [2024-11-20 21:13:58.235732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.270 [2024-11-20 21:13:58.235766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.270 [2024-11-20 21:13:58.235830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.270 [2024-11-20 21:13:58.235851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:40.270 [2024-11-20 21:13:58.235874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.270 [2024-11-20 21:13:58.235893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.270 [2024-11-20 21:13:58.235935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.270 [2024-11-20 21:13:58.235994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:40.270 [2024-11-20 21:13:58.236015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.270 [2024-11-20 21:13:58.236040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.270 [2024-11-20 21:13:58.236065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.270 [2024-11-20 21:13:58.236085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:40.270 [2024-11-20 21:13:58.236104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.270 [2024-11-20 21:13:58.236125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.270 [2024-11-20 21:13:58.244783] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.270 [2024-11-20 21:13:58.244909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:40.270 [2024-11-20 21:13:58.244965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.270 [2024-11-20 21:13:58.244986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.270 [2024-11-20 21:13:58.251913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.270 [2024-11-20 21:13:58.252030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:40.270 [2024-11-20 21:13:58.252077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.270 [2024-11-20 21:13:58.252103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.270 [2024-11-20 21:13:58.252162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.270 [2024-11-20 21:13:58.252185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:40.270 [2024-11-20 21:13:58.252210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.270 [2024-11-20 21:13:58.252228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.270 [2024-11-20 21:13:58.252280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.270 [2024-11-20 21:13:58.252302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:40.270 [2024-11-20 21:13:58.252369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.270 [2024-11-20 21:13:58.252392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.270 [2024-11-20 21:13:58.252455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.270 [2024-11-20 21:13:58.252482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:40.270 [2024-11-20 21:13:58.252501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.270 [2024-11-20 21:13:58.252520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.270 [2024-11-20 21:13:58.252559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.270 [2024-11-20 21:13:58.252581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:40.270 [2024-11-20 21:13:58.252601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.270 [2024-11-20 21:13:58.252654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.270 [2024-11-20 21:13:58.252710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.270 [2024-11-20 21:13:58.252732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:40.270 [2024-11-20 21:13:58.252818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.270 [2024-11-20 21:13:58.252839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:40.270 [2024-11-20 21:13:58.252893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:40.270 [2024-11-20 21:13:58.252921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:40.270 [2024-11-20 21:13:58.252941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:40.270 [2024-11-20 21:13:58.252960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:31:40.270 [2024-11-20 21:13:58.253086] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 27.474 ms, result 0 00:31:40.843 00:31:40.843 00:31:40.843 21:13:58 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:31:40.843 [2024-11-20 21:13:58.736379] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 00:31:40.843 [2024-11-20 21:13:58.736714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95529 ] 00:31:40.843 [2024-11-20 21:13:58.883487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.843 [2024-11-20 21:13:58.912152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:41.107 [2024-11-20 21:13:59.026176] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:41.107 [2024-11-20 21:13:59.026527] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:41.107 [2024-11-20 21:13:59.187342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.107 [2024-11-20 21:13:59.187558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:41.107 [2024-11-20 21:13:59.187582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:41.107 [2024-11-20 21:13:59.187592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.107 [2024-11-20 21:13:59.187667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.107 [2024-11-20 21:13:59.187678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:41.107 [2024-11-20 21:13:59.187693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:31:41.107 [2024-11-20 21:13:59.187706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.107 [2024-11-20 21:13:59.187731] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:41.107 [2024-11-20 21:13:59.188021] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:41.107 [2024-11-20 21:13:59.188041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.107 [2024-11-20 21:13:59.188051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:41.107 [2024-11-20 21:13:59.188060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:31:41.107 [2024-11-20 21:13:59.188071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.107 [2024-11-20 21:13:59.188346] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:41.107 [2024-11-20 21:13:59.188377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.107 [2024-11-20 21:13:59.188387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:41.107 [2024-11-20 21:13:59.188398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:41.107 [2024-11-20 21:13:59.188410] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:31:41.107 [2024-11-20 21:13:59.188468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.107 [2024-11-20 21:13:59.188481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:41.107 [2024-11-20 21:13:59.188489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:41.107 [2024-11-20 21:13:59.188498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.107 [2024-11-20 21:13:59.188838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.107 [2024-11-20 21:13:59.188855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:41.107 [2024-11-20 21:13:59.188868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:31:41.107 [2024-11-20 21:13:59.188875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.107 [2024-11-20 21:13:59.188960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.107 [2024-11-20 21:13:59.188971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:41.107 [2024-11-20 21:13:59.188979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:41.107 [2024-11-20 21:13:59.188986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.107 [2024-11-20 21:13:59.189013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.107 [2024-11-20 21:13:59.189023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:41.107 [2024-11-20 21:13:59.189036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:41.107 [2024-11-20 21:13:59.189047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.107 [2024-11-20 21:13:59.189074] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:41.107 [2024-11-20 21:13:59.191289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.107 [2024-11-20 21:13:59.191328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:41.107 [2024-11-20 21:13:59.191339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.214 ms 00:31:41.107 [2024-11-20 21:13:59.191347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.107 [2024-11-20 21:13:59.191381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.107 [2024-11-20 21:13:59.191389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:41.107 [2024-11-20 21:13:59.191397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:41.107 [2024-11-20 21:13:59.191410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.107 [2024-11-20 21:13:59.191464] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:41.107 [2024-11-20 21:13:59.191488] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:41.107 [2024-11-20 21:13:59.191535] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:41.107 [2024-11-20 21:13:59.191551] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:41.108 [2024-11-20 21:13:59.191655] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:41.108 [2024-11-20 21:13:59.191665] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:41.108 [2024-11-20 21:13:59.191680] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:41.108 [2024-11-20 21:13:59.191690] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:41.108 [2024-11-20 21:13:59.191704] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:41.108 [2024-11-20 21:13:59.191716] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:41.108 [2024-11-20 21:13:59.191723] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:41.108 [2024-11-20 21:13:59.191730] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:41.108 [2024-11-20 21:13:59.191738] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:41.108 [2024-11-20 21:13:59.191774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.108 [2024-11-20 21:13:59.191782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:41.108 [2024-11-20 21:13:59.191790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:31:41.108 [2024-11-20 21:13:59.191801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.108 [2024-11-20 21:13:59.191883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.108 [2024-11-20 21:13:59.191892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:41.108 [2024-11-20 21:13:59.191903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:41.108 [2024-11-20 21:13:59.191909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.108 [2024-11-20 21:13:59.192020] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:41.108 [2024-11-20 21:13:59.192031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:41.108 [2024-11-20 21:13:59.192042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:41.108 [2024-11-20 21:13:59.192053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:41.108 [2024-11-20 21:13:59.192065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:41.108 [2024-11-20 21:13:59.192072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:41.108 [2024-11-20 21:13:59.192079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:41.108 [2024-11-20 21:13:59.192087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:41.108 [2024-11-20 21:13:59.192095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:41.108 [2024-11-20 21:13:59.192101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:41.108 [2024-11-20 21:13:59.192109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:41.108 [2024-11-20 21:13:59.192117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:41.108 [2024-11-20 21:13:59.192124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:41.108 [2024-11-20 
21:13:59.192131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:41.108 [2024-11-20 21:13:59.192138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:41.108 [2024-11-20 21:13:59.192145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:41.108 [2024-11-20 21:13:59.192152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:41.108 [2024-11-20 21:13:59.192159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:41.108 [2024-11-20 21:13:59.192168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:41.108 [2024-11-20 21:13:59.192175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:41.108 [2024-11-20 21:13:59.192182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:41.108 [2024-11-20 21:13:59.192189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:41.108 [2024-11-20 21:13:59.192195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:41.108 [2024-11-20 21:13:59.192201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:41.108 [2024-11-20 21:13:59.192208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:41.108 [2024-11-20 21:13:59.192214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:41.108 [2024-11-20 21:13:59.192220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:41.108 [2024-11-20 21:13:59.192227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:41.108 [2024-11-20 21:13:59.192233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:41.108 [2024-11-20 21:13:59.192240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:41.108 [2024-11-20 21:13:59.192246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:41.108 [2024-11-20 21:13:59.192253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:41.108 [2024-11-20 21:13:59.192260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:41.108 [2024-11-20 21:13:59.192266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:41.108 [2024-11-20 21:13:59.192277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:41.108 [2024-11-20 21:13:59.192283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:41.108 [2024-11-20 21:13:59.192290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:41.108 [2024-11-20 21:13:59.192296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:41.108 [2024-11-20 21:13:59.192303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:41.108 [2024-11-20 21:13:59.192309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:41.108 [2024-11-20 21:13:59.192315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:41.108 [2024-11-20 21:13:59.192322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:41.108 [2024-11-20 21:13:59.192329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:41.108 [2024-11-20 21:13:59.192337] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:41.108 [2024-11-20 21:13:59.192345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:31:41.108 [2024-11-20 21:13:59.192352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:41.108 [2024-11-20 21:13:59.192362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:41.108 [2024-11-20 21:13:59.192370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:41.108 [2024-11-20 21:13:59.192377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:41.108 [2024-11-20 21:13:59.192383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:41.108 [2024-11-20 21:13:59.192392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:41.108 [2024-11-20 21:13:59.192398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:41.108 [2024-11-20 21:13:59.192405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:41.108 [2024-11-20 21:13:59.192413] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:41.108 [2024-11-20 21:13:59.192423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:41.108 [2024-11-20 21:13:59.192432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:41.108 [2024-11-20 21:13:59.192439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:41.108 [2024-11-20 21:13:59.192446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:41.108 [2024-11-20 21:13:59.192453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:41.108 [2024-11-20 21:13:59.192460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:41.108 [2024-11-20 21:13:59.192467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:41.108 [2024-11-20 21:13:59.192474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:41.109 [2024-11-20 21:13:59.192482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:41.109 [2024-11-20 21:13:59.192489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:41.109 [2024-11-20 21:13:59.192496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:41.109 [2024-11-20 21:13:59.192502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:41.109 [2024-11-20 21:13:59.192511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:41.109 [2024-11-20 21:13:59.192518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:41.109 [2024-11-20 21:13:59.192531] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:41.109 [2024-11-20 21:13:59.192538] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:41.109 [2024-11-20 21:13:59.192546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:41.109 [2024-11-20 21:13:59.192555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:41.109 [2024-11-20 21:13:59.192563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:41.109 [2024-11-20 21:13:59.192570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:41.109 [2024-11-20 21:13:59.192577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:41.109 [2024-11-20 21:13:59.192587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.109 [2024-11-20 21:13:59.192595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:41.109 [2024-11-20 21:13:59.192602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:31:41.109 [2024-11-20 21:13:59.192613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.109 [2024-11-20 21:13:59.201996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.109 [2024-11-20 21:13:59.202174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:41.109 [2024-11-20 21:13:59.202191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.334 ms 00:31:41.109 [2024-11-20 21:13:59.202200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.109 [2024-11-20 21:13:59.202286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.109 [2024-11-20 21:13:59.202303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:41.109 [2024-11-20 21:13:59.202312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:31:41.109 [2024-11-20 21:13:59.202319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.370 [2024-11-20 21:13:59.223238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.370 [2024-11-20 21:13:59.223300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:41.370 [2024-11-20 21:13:59.223312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.860 ms 00:31:41.370 [2024-11-20 21:13:59.223321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.370 [2024-11-20 21:13:59.223366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.370 [2024-11-20 21:13:59.223376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:41.370 [2024-11-20 21:13:59.223384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:41.370 [2024-11-20 21:13:59.223398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.370 [2024-11-20 21:13:59.223502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.370 [2024-11-20 
21:13:59.223517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:41.370 [2024-11-20 21:13:59.223526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:31:41.370 [2024-11-20 21:13:59.223534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.370 [2024-11-20 21:13:59.223658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.370 [2024-11-20 21:13:59.223670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:41.370 [2024-11-20 21:13:59.223680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:31:41.370 [2024-11-20 21:13:59.223688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.370 [2024-11-20 21:13:59.232002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.371 [2024-11-20 21:13:59.232049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:41.371 [2024-11-20 21:13:59.232074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.289 ms 00:31:41.371 [2024-11-20 21:13:59.232087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.371 [2024-11-20 21:13:59.232208] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:41.371 [2024-11-20 21:13:59.232224] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:41.371 [2024-11-20 21:13:59.232239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.371 [2024-11-20 21:13:59.232253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:41.371 [2024-11-20 21:13:59.232263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:31:41.371 [2024-11-20 21:13:59.232274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.371 [2024-11-20 21:13:59.245252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.371 [2024-11-20 21:13:59.245293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:41.371 [2024-11-20 21:13:59.245310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.961 ms 00:31:41.371 [2024-11-20 21:13:59.245318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.371 [2024-11-20 21:13:59.245446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.371 [2024-11-20 21:13:59.245456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:41.371 [2024-11-20 21:13:59.245471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:31:41.371 [2024-11-20 21:13:59.245482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.371 [2024-11-20 21:13:59.245529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.371 [2024-11-20 21:13:59.245542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:41.371 [2024-11-20 21:13:59.245551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:41.371 [2024-11-20 21:13:59.245559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.371 [2024-11-20 21:13:59.245924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.371 [2024-11-20 21:13:59.245947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize P2L checkpointing 00:31:41.371 [2024-11-20 21:13:59.245959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:31:41.371 [2024-11-20 21:13:59.245967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.371 [2024-11-20 21:13:59.245984] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:41.371 [2024-11-20 21:13:59.245994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.371 [2024-11-20 21:13:59.246006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:41.371 [2024-11-20 21:13:59.246014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:41.371 [2024-11-20 21:13:59.246023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.371 [2024-11-20 21:13:59.255754] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:41.371 [2024-11-20 21:13:59.255911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.371 [2024-11-20 21:13:59.255926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:41.371 [2024-11-20 21:13:59.255936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.869 ms 00:31:41.371 [2024-11-20 21:13:59.255944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.371 [2024-11-20 21:13:59.258437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.371 [2024-11-20 21:13:59.258601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:41.371 [2024-11-20 21:13:59.258626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.468 ms 00:31:41.371 [2024-11-20 21:13:59.258640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.371 [2024-11-20 21:13:59.258741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.371 [2024-11-20 21:13:59.258792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:41.371 [2024-11-20 21:13:59.258802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:31:41.371 [2024-11-20 21:13:59.258810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.371 [2024-11-20 21:13:59.258841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.371 [2024-11-20 21:13:59.258850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:41.371 [2024-11-20 21:13:59.258858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:41.371 [2024-11-20 21:13:59.258865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.371 [2024-11-20 21:13:59.258906] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:41.371 [2024-11-20 21:13:59.258916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.371 [2024-11-20 21:13:59.258924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:41.371 [2024-11-20 21:13:59.258932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:41.371 [2024-11-20 21:13:59.258940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.371 [2024-11-20 21:13:59.264952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.371 [2024-11-20 21:13:59.265115] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:41.371 [2024-11-20 21:13:59.265133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.992 ms 00:31:41.371 [2024-11-20 21:13:59.265141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.371 [2024-11-20 21:13:59.265219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:41.371 [2024-11-20 21:13:59.265230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:41.371 [2024-11-20 21:13:59.265238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:31:41.371 [2024-11-20 21:13:59.265249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:41.371 [2024-11-20 21:13:59.266413] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 78.625 ms, result 0 00:31:42.758  [2024-11-20T21:14:01.821Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-20T21:14:02.767Z] Copying: 46/1024 [MB] (24 MBps) [2024-11-20T21:14:03.712Z] Copying: 57/1024 [MB] (11 MBps) [2024-11-20T21:14:04.657Z] Copying: 67/1024 [MB] (10 MBps) [2024-11-20T21:14:05.600Z] Copying: 78/1024 [MB] (10 MBps) [2024-11-20T21:14:06.542Z] Copying: 91/1024 [MB] (12 MBps) [2024-11-20T21:14:07.486Z] Copying: 102/1024 [MB] (10 MBps) [2024-11-20T21:14:08.871Z] Copying: 115/1024 [MB] (13 MBps) [2024-11-20T21:14:09.821Z] Copying: 128/1024 [MB] (13 MBps) [2024-11-20T21:14:10.765Z] Copying: 139/1024 [MB] (10 MBps) [2024-11-20T21:14:11.708Z] Copying: 150/1024 [MB] (10 MBps) [2024-11-20T21:14:12.652Z] Copying: 170/1024 [MB] (19 MBps) [2024-11-20T21:14:13.596Z] Copying: 190/1024 [MB] (20 MBps) [2024-11-20T21:14:14.548Z] Copying: 205/1024 [MB] (15 MBps) [2024-11-20T21:14:15.549Z] Copying: 219/1024 [MB] (13 MBps) [2024-11-20T21:14:16.494Z] Copying: 237/1024 [MB] (17 MBps) [2024-11-20T21:14:17.880Z] Copying: 254/1024 [MB] (17 MBps) [2024-11-20T21:14:18.823Z] Copying: 265/1024 [MB] (11 MBps) [2024-11-20T21:14:19.780Z] Copying: 284/1024 [MB] (18 MBps) [2024-11-20T21:14:20.722Z] Copying: 303/1024 [MB] (19 MBps) [2024-11-20T21:14:21.665Z] Copying: 324/1024 [MB] (20 MBps) [2024-11-20T21:14:22.605Z] Copying: 344/1024 [MB] (20 MBps) [2024-11-20T21:14:23.549Z] Copying: 366/1024 [MB] (22 MBps) [2024-11-20T21:14:24.492Z] Copying: 387/1024 [MB] (21 MBps) [2024-11-20T21:14:25.879Z] Copying: 409/1024 [MB] (21 MBps) [2024-11-20T21:14:26.823Z] Copying: 421/1024 [MB] (12 MBps) [2024-11-20T21:14:27.766Z] Copying: 439/1024 [MB] (17 MBps) [2024-11-20T21:14:28.711Z] Copying: 457/1024 [MB] (18 MBps) [2024-11-20T21:14:29.654Z] Copying: 471/1024 [MB] (14 MBps) [2024-11-20T21:14:30.599Z] Copying: 484/1024 [MB] (12 MBps) [2024-11-20T21:14:31.543Z] Copying: 495/1024 [MB] (11 MBps) [2024-11-20T21:14:32.486Z] Copying: 518/1024 [MB] (23 MBps) [2024-11-20T21:14:33.873Z] Copying: 543/1024 [MB] (24 MBps) [2024-11-20T21:14:34.815Z] Copying: 554/1024 [MB] (10 MBps) [2024-11-20T21:14:35.755Z] Copying: 569/1024 [MB] (15 MBps) [2024-11-20T21:14:36.694Z] Copying: 581/1024 [MB] (11 MBps) [2024-11-20T21:14:37.647Z] Copying: 602/1024 [MB] (21 MBps) [2024-11-20T21:14:38.590Z] Copying: 622/1024 [MB] (19 MBps) [2024-11-20T21:14:39.534Z] Copying: 634/1024 [MB] (11 MBps) [2024-11-20T21:14:40.479Z] Copying: 649/1024 [MB] (15 MBps) [2024-11-20T21:14:41.866Z] Copying: 663/1024 [MB] (13 MBps) [2024-11-20T21:14:42.810Z] Copying: 680/1024 [MB] (17 MBps) [2024-11-20T21:14:43.757Z] Copying: 692/1024 [MB] (12 MBps) [2024-11-20T21:14:44.700Z] Copying: 715/1024 [MB] (22 MBps) 
[2024-11-20T21:14:45.645Z] Copying: 737/1024 [MB] (21 MBps) [2024-11-20T21:14:46.610Z] Copying: 752/1024 [MB] (15 MBps) [2024-11-20T21:14:47.576Z] Copying: 768/1024 [MB] (15 MBps) [2024-11-20T21:14:48.521Z] Copying: 791/1024 [MB] (22 MBps) [2024-11-20T21:14:49.465Z] Copying: 815/1024 [MB] (24 MBps) [2024-11-20T21:14:50.852Z] Copying: 838/1024 [MB] (22 MBps) [2024-11-20T21:14:51.797Z] Copying: 857/1024 [MB] (18 MBps) [2024-11-20T21:14:52.741Z] Copying: 870/1024 [MB] (13 MBps) [2024-11-20T21:14:53.685Z] Copying: 884/1024 [MB] (13 MBps) [2024-11-20T21:14:54.628Z] Copying: 904/1024 [MB] (20 MBps) [2024-11-20T21:14:55.574Z] Copying: 915/1024 [MB] (11 MBps) [2024-11-20T21:14:56.516Z] Copying: 926/1024 [MB] (10 MBps) [2024-11-20T21:14:57.458Z] Copying: 944/1024 [MB] (18 MBps) [2024-11-20T21:14:58.843Z] Copying: 959/1024 [MB] (14 MBps) [2024-11-20T21:14:59.784Z] Copying: 974/1024 [MB] (15 MBps) [2024-11-20T21:15:00.730Z] Copying: 989/1024 [MB] (14 MBps) [2024-11-20T21:15:01.675Z] Copying: 1007/1024 [MB] (18 MBps) [2024-11-20T21:15:01.675Z] Copying: 1023/1024 [MB] (16 MBps) [2024-11-20T21:15:01.938Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-20 21:15:01.873471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.819 [2024-11-20 21:15:01.873945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:43.819 [2024-11-20 21:15:01.874071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:43.819 [2024-11-20 21:15:01.874113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.819 [2024-11-20 21:15:01.874178] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:43.819 [2024-11-20 21:15:01.875043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.819 [2024-11-20 21:15:01.875957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:43.819 [2024-11-20 21:15:01.876101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:32:43.819 [2024-11-20 21:15:01.876133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.819 [2024-11-20 21:15:01.876461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.819 [2024-11-20 21:15:01.876558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:43.819 [2024-11-20 21:15:01.877076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:32:43.819 [2024-11-20 21:15:01.877132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.819 [2024-11-20 21:15:01.877218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.819 [2024-11-20 21:15:01.877319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:43.819 [2024-11-20 21:15:01.877346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:43.819 [2024-11-20 21:15:01.877366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.819 [2024-11-20 21:15:01.877957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.819 [2024-11-20 21:15:01.878102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:43.819 [2024-11-20 21:15:01.878219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:32:43.819 [2024-11-20 21:15:01.878264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:32:43.819 [2024-11-20 21:15:01.878301] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:43.819 [2024-11-20 21:15:01.878329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.878367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.878398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.878426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.879136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.879706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.879844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.879875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.879900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.879926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.879953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.879979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 
[2024-11-20 21:15:01.880311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:43.819 [2024-11-20 21:15:01.880950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 
state: free 00:32:43.820 [2024-11-20 21:15:01.880976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 
0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.881719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.883627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.883689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.883716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.883742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.883798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.883823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.883849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.884207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.884261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.884331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.884358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.884382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.884406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.884430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.884455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.884481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.884507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.884533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.884559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.884585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:43.820 [2024-11-20 21:15:01.884638] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:43.820 [2024-11-20 21:15:01.884668] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f11f99b-a93a-40e6-9e26-852bcbc4a86c 00:32:43.820 [2024-11-20 21:15:01.884697] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:43.820 [2024-11-20 21:15:01.884723] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:43.820 [2024-11-20 21:15:01.884776] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:43.820 [2024-11-20 21:15:01.884806] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:43.820 [2024-11-20 21:15:01.884856] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:43.820 [2024-11-20 21:15:01.884882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:43.820 [2024-11-20 21:15:01.884907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:43.820 [2024-11-20 21:15:01.884930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:43.820 [2024-11-20 21:15:01.884950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:43.820 [2024-11-20 21:15:01.884985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.820 [2024-11-20 21:15:01.885011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:43.820 [2024-11-20 21:15:01.885042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.682 ms 00:32:43.820 [2024-11-20 21:15:01.885088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.820 [2024-11-20 21:15:01.888567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.820 [2024-11-20 21:15:01.888617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:43.820 [2024-11-20 21:15:01.888630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.372 ms 00:32:43.820 [2024-11-20 21:15:01.888641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.820 [2024-11-20 21:15:01.888865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:43.820 [2024-11-20 21:15:01.888887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:43.820 [2024-11-20 21:15:01.888907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:32:43.820 [2024-11-20 21:15:01.888916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.820 [2024-11-20 21:15:01.898880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:43.820 [2024-11-20 21:15:01.899080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:43.820 [2024-11-20 21:15:01.899100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:43.820 [2024-11-20 21:15:01.899110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.820 [2024-11-20 21:15:01.899214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:43.820 [2024-11-20 21:15:01.899225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:43.821 [2024-11-20 21:15:01.899243] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:43.821 [2024-11-20 21:15:01.899252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.821 [2024-11-20 21:15:01.899313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:43.821 [2024-11-20 21:15:01.899326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:43.821 [2024-11-20 21:15:01.899336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:43.821 [2024-11-20 21:15:01.899345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.821 [2024-11-20 21:15:01.899364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:43.821 [2024-11-20 21:15:01.899375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:43.821 [2024-11-20 21:15:01.899383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:43.821 [2024-11-20 21:15:01.899396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:43.821 [2024-11-20 21:15:01.918768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:43.821 [2024-11-20 21:15:01.918822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:43.821 [2024-11-20 21:15:01.918835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:43.821 [2024-11-20 21:15:01.918844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.082 [2024-11-20 21:15:01.934786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.082 [2024-11-20 21:15:01.934840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:44.082 [2024-11-20 21:15:01.934852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.082 [2024-11-20 21:15:01.934872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.082 [2024-11-20 21:15:01.934932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.082 [2024-11-20 21:15:01.934943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:44.082 [2024-11-20 21:15:01.934953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.082 [2024-11-20 21:15:01.934963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.082 [2024-11-20 21:15:01.935002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.082 [2024-11-20 21:15:01.935013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:44.082 [2024-11-20 21:15:01.935022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.082 [2024-11-20 21:15:01.935031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.082 [2024-11-20 21:15:01.935101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.082 [2024-11-20 21:15:01.935113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:44.082 [2024-11-20 21:15:01.935123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.082 [2024-11-20 21:15:01.935138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.082 [2024-11-20 21:15:01.935169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.082 [2024-11-20 21:15:01.935181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:32:44.082 [2024-11-20 21:15:01.935190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.082 [2024-11-20 21:15:01.935199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.083 [2024-11-20 21:15:01.935256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.083 [2024-11-20 21:15:01.935268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:44.083 [2024-11-20 21:15:01.935283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.083 [2024-11-20 21:15:01.935293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.083 [2024-11-20 21:15:01.935350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:44.083 [2024-11-20 21:15:01.935368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:44.083 [2024-11-20 21:15:01.935378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:44.083 [2024-11-20 21:15:01.935389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:44.083 [2024-11-20 21:15:01.935551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 62.039 ms, result 0 00:32:44.344 00:32:44.344 00:32:44.345 21:15:02 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:32:46.892 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:32:46.892 21:15:04 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:32:46.892 [2024-11-20 21:15:04.565431] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
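Annotation: the records above show the restore test's verify-then-continue pattern: after the 'FTL fast shutdown' finishes, "md5sum -c testfile.md5" confirms the data read back from ftl0 matches the checksum manifest (testfile: OK), and only then does a second spdk_dd start writing the next region at --seek=131072. A minimal sketch of the same verify step follows; the manifest format is the standard md5sum one, and all paths are illustrative, not the test's real files.

  # Sketch of the verify step: recompute md5 digests the way
  # "md5sum -c testfile.md5" checks them. Paths are illustrative.
  import hashlib

  def md5_of(path, chunk=1 << 20):
      h = hashlib.md5()
      with open(path, "rb") as f:
          for block in iter(lambda: f.read(chunk), b""):
              h.update(block)
      return h.hexdigest()

  def check_manifest(manifest):
      # md5sum manifests hold lines of "<hex digest>  <path>".
      all_ok = True
      for line in open(manifest):
          digest, _, path = line.rstrip("\n").partition("  ")
          ok = md5_of(path) == digest
          print(f"{path}: {'OK' if ok else 'FAILED'}")
          all_ok = all_ok and ok
      return all_ok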
00:32:46.892 [2024-11-20 21:15:04.565597] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96205 ] 00:32:46.892 [2024-11-20 21:15:04.713690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:46.892 [2024-11-20 21:15:04.754320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.892 [2024-11-20 21:15:04.863508] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:46.892 [2024-11-20 21:15:04.863584] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:47.155 [2024-11-20 21:15:05.027723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.155 [2024-11-20 21:15:05.027816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:47.155 [2024-11-20 21:15:05.027834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:47.155 [2024-11-20 21:15:05.027844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.155 [2024-11-20 21:15:05.027920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.155 [2024-11-20 21:15:05.027932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:47.155 [2024-11-20 21:15:05.027943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:32:47.155 [2024-11-20 21:15:05.027951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.155 [2024-11-20 21:15:05.027979] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:47.155 [2024-11-20 21:15:05.028285] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:47.155 [2024-11-20 21:15:05.028308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.155 [2024-11-20 21:15:05.028318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:47.155 [2024-11-20 21:15:05.028328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:32:47.155 [2024-11-20 21:15:05.028344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.155 [2024-11-20 21:15:05.028663] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:47.155 [2024-11-20 21:15:05.028693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.155 [2024-11-20 21:15:05.028703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:47.155 [2024-11-20 21:15:05.028714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:47.155 [2024-11-20 21:15:05.028722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.155 [2024-11-20 21:15:05.028828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.155 [2024-11-20 21:15:05.028843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:47.155 [2024-11-20 21:15:05.028853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:32:47.155 [2024-11-20 21:15:05.028863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.155 [2024-11-20 21:15:05.029216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:47.155 [2024-11-20 21:15:05.029230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:47.155 [2024-11-20 21:15:05.029240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:32:47.155 [2024-11-20 21:15:05.029249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.155 [2024-11-20 21:15:05.029344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.155 [2024-11-20 21:15:05.029354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:47.155 [2024-11-20 21:15:05.029364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:32:47.155 [2024-11-20 21:15:05.029372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.155 [2024-11-20 21:15:05.029402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.155 [2024-11-20 21:15:05.029416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:47.155 [2024-11-20 21:15:05.029426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:47.155 [2024-11-20 21:15:05.029434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.155 [2024-11-20 21:15:05.029458] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:47.156 [2024-11-20 21:15:05.032378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.156 [2024-11-20 21:15:05.032631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:47.156 [2024-11-20 21:15:05.032653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.922 ms 00:32:47.156 [2024-11-20 21:15:05.032663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.156 [2024-11-20 21:15:05.032708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.156 [2024-11-20 21:15:05.032720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:47.156 [2024-11-20 21:15:05.032730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:47.156 [2024-11-20 21:15:05.032738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.156 [2024-11-20 21:15:05.032818] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:47.156 [2024-11-20 21:15:05.032853] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:47.156 [2024-11-20 21:15:05.032908] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:47.156 [2024-11-20 21:15:05.032925] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:47.156 [2024-11-20 21:15:05.033036] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:47.156 [2024-11-20 21:15:05.033048] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:47.156 [2024-11-20 21:15:05.033064] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:47.156 [2024-11-20 21:15:05.033076] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:47.156 [2024-11-20 21:15:05.033089] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:47.156 [2024-11-20 21:15:05.033101] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:47.156 [2024-11-20 21:15:05.033114] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:47.156 [2024-11-20 21:15:05.033122] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:47.156 [2024-11-20 21:15:05.033130] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:47.156 [2024-11-20 21:15:05.033138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.156 [2024-11-20 21:15:05.033145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:47.156 [2024-11-20 21:15:05.033154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:32:47.156 [2024-11-20 21:15:05.033162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.156 [2024-11-20 21:15:05.033254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.156 [2024-11-20 21:15:05.033264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:47.156 [2024-11-20 21:15:05.033273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:47.156 [2024-11-20 21:15:05.033290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.156 [2024-11-20 21:15:05.033394] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:47.156 [2024-11-20 21:15:05.033407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:47.156 [2024-11-20 21:15:05.033418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:47.156 [2024-11-20 21:15:05.033431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:47.156 [2024-11-20 21:15:05.033440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:47.156 [2024-11-20 21:15:05.033447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:47.156 [2024-11-20 21:15:05.033455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:47.156 [2024-11-20 21:15:05.033462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:47.156 [2024-11-20 21:15:05.033471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:47.156 [2024-11-20 21:15:05.033478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:47.156 [2024-11-20 21:15:05.033485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:47.156 [2024-11-20 21:15:05.033495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:47.156 [2024-11-20 21:15:05.033504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:47.156 [2024-11-20 21:15:05.033511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:47.156 [2024-11-20 21:15:05.033519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:47.156 [2024-11-20 21:15:05.033527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:47.156 [2024-11-20 21:15:05.033535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:47.156 [2024-11-20 21:15:05.033543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:47.156 [2024-11-20 21:15:05.033552] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:47.156 [2024-11-20 21:15:05.033560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:47.156 [2024-11-20 21:15:05.033567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:47.156 [2024-11-20 21:15:05.033574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:47.156 [2024-11-20 21:15:05.033581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:47.156 [2024-11-20 21:15:05.033588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:47.156 [2024-11-20 21:15:05.033595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:47.156 [2024-11-20 21:15:05.033602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:47.156 [2024-11-20 21:15:05.033609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:47.156 [2024-11-20 21:15:05.033615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:47.156 [2024-11-20 21:15:05.033622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:47.156 [2024-11-20 21:15:05.033629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:47.156 [2024-11-20 21:15:05.033636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:47.156 [2024-11-20 21:15:05.033643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:47.156 [2024-11-20 21:15:05.033650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:47.156 [2024-11-20 21:15:05.033657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:47.156 [2024-11-20 21:15:05.033670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:47.156 [2024-11-20 21:15:05.033676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:47.156 [2024-11-20 21:15:05.033683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:47.156 [2024-11-20 21:15:05.033690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:47.156 [2024-11-20 21:15:05.033697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:47.156 [2024-11-20 21:15:05.033704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:47.156 [2024-11-20 21:15:05.033710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:47.156 [2024-11-20 21:15:05.033716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:47.156 [2024-11-20 21:15:05.033723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:47.156 [2024-11-20 21:15:05.033734] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:47.156 [2024-11-20 21:15:05.033759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:47.156 [2024-11-20 21:15:05.033768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:47.156 [2024-11-20 21:15:05.033777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:47.156 [2024-11-20 21:15:05.033793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:47.156 [2024-11-20 21:15:05.033801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:47.156 [2024-11-20 21:15:05.033809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:47.156 
[2024-11-20 21:15:05.033820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:47.156 [2024-11-20 21:15:05.033858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:47.156 [2024-11-20 21:15:05.033866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:47.156 [2024-11-20 21:15:05.033875] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:47.156 [2024-11-20 21:15:05.033886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:47.156 [2024-11-20 21:15:05.033898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:47.156 [2024-11-20 21:15:05.033907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:47.157 [2024-11-20 21:15:05.033918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:47.157 [2024-11-20 21:15:05.033927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:47.157 [2024-11-20 21:15:05.033937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:47.157 [2024-11-20 21:15:05.033954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:47.157 [2024-11-20 21:15:05.033962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:47.157 [2024-11-20 21:15:05.033970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:47.157 [2024-11-20 21:15:05.033978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:47.157 [2024-11-20 21:15:05.033988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:47.157 [2024-11-20 21:15:05.033997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:47.157 [2024-11-20 21:15:05.034009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:47.157 [2024-11-20 21:15:05.034017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:47.157 [2024-11-20 21:15:05.034030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:47.157 [2024-11-20 21:15:05.034039] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:47.157 [2024-11-20 21:15:05.034048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:47.157 [2024-11-20 21:15:05.034057] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:47.157 [2024-11-20 21:15:05.034066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:47.157 [2024-11-20 21:15:05.034075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:47.157 [2024-11-20 21:15:05.034083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:47.157 [2024-11-20 21:15:05.034093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.157 [2024-11-20 21:15:05.034101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:47.157 [2024-11-20 21:15:05.034109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:32:47.157 [2024-11-20 21:15:05.034117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.157 [2024-11-20 21:15:05.048471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.157 [2024-11-20 21:15:05.048526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:47.157 [2024-11-20 21:15:05.048542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.295 ms 00:32:47.157 [2024-11-20 21:15:05.048558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.157 [2024-11-20 21:15:05.048655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.157 [2024-11-20 21:15:05.048665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:47.157 [2024-11-20 21:15:05.048679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:32:47.157 [2024-11-20 21:15:05.048688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.157 [2024-11-20 21:15:05.073737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.157 [2024-11-20 21:15:05.073864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:47.157 [2024-11-20 21:15:05.073885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.959 ms 00:32:47.157 [2024-11-20 21:15:05.073899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.157 [2024-11-20 21:15:05.073963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.157 [2024-11-20 21:15:05.073980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:47.157 [2024-11-20 21:15:05.073994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:47.157 [2024-11-20 21:15:05.074008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.157 [2024-11-20 21:15:05.074162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.157 [2024-11-20 21:15:05.074179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:47.157 [2024-11-20 21:15:05.074200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:32:47.157 [2024-11-20 21:15:05.074212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.157 [2024-11-20 21:15:05.074400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.157 [2024-11-20 21:15:05.074418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:47.157 [2024-11-20 21:15:05.074431] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:32:47.157 [2024-11-20 21:15:05.074444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.157 [2024-11-20 21:15:05.086292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.157 [2024-11-20 21:15:05.086346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:47.157 [2024-11-20 21:15:05.086358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.818 ms 00:32:47.157 [2024-11-20 21:15:05.086376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.157 [2024-11-20 21:15:05.086529] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:47.157 [2024-11-20 21:15:05.086543] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:47.157 [2024-11-20 21:15:05.086559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.157 [2024-11-20 21:15:05.086568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:47.157 [2024-11-20 21:15:05.086577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:32:47.157 [2024-11-20 21:15:05.086586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.157 [2024-11-20 21:15:05.099083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.157 [2024-11-20 21:15:05.099144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:47.157 [2024-11-20 21:15:05.099162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.475 ms 00:32:47.157 [2024-11-20 21:15:05.099170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.157 [2024-11-20 21:15:05.099313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.157 [2024-11-20 21:15:05.099323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:47.157 [2024-11-20 21:15:05.099339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:32:47.157 [2024-11-20 21:15:05.099347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.157 [2024-11-20 21:15:05.099409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.157 [2024-11-20 21:15:05.099419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:47.157 [2024-11-20 21:15:05.099433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:47.157 [2024-11-20 21:15:05.099442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.157 [2024-11-20 21:15:05.099813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.157 [2024-11-20 21:15:05.099833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:47.157 [2024-11-20 21:15:05.099842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:32:47.157 [2024-11-20 21:15:05.099856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.157 [2024-11-20 21:15:05.099877] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:47.157 [2024-11-20 21:15:05.099889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.157 [2024-11-20 21:15:05.099898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:47.157 [2024-11-20 21:15:05.099911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:47.157 [2024-11-20 21:15:05.099919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.157 [2024-11-20 21:15:05.110730] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:47.157 [2024-11-20 21:15:05.110922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.158 [2024-11-20 21:15:05.110934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:47.158 [2024-11-20 21:15:05.110945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.981 ms 00:32:47.158 [2024-11-20 21:15:05.110954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.158 [2024-11-20 21:15:05.113418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.158 [2024-11-20 21:15:05.113456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:47.158 [2024-11-20 21:15:05.113470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:32:47.158 [2024-11-20 21:15:05.113479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.158 [2024-11-20 21:15:05.113585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.158 [2024-11-20 21:15:05.113595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:47.158 [2024-11-20 21:15:05.113606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:32:47.158 [2024-11-20 21:15:05.113614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.158 [2024-11-20 21:15:05.113650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.158 [2024-11-20 21:15:05.113660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:47.158 [2024-11-20 21:15:05.113669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:47.158 [2024-11-20 21:15:05.113679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.158 [2024-11-20 21:15:05.113725] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:47.158 [2024-11-20 21:15:05.113736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.158 [2024-11-20 21:15:05.113764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:47.158 [2024-11-20 21:15:05.113775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:47.158 [2024-11-20 21:15:05.113783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.158 [2024-11-20 21:15:05.121365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.158 [2024-11-20 21:15:05.121587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:47.158 [2024-11-20 21:15:05.121609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.562 ms 00:32:47.158 [2024-11-20 21:15:05.121629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.158 [2024-11-20 21:15:05.122062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:47.158 [2024-11-20 21:15:05.122102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:47.158 [2024-11-20 21:15:05.122120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.054 ms 00:32:47.158 [2024-11-20 21:15:05.122129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:47.158 [2024-11-20 21:15:05.123632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 95.401 ms, result 0 00:32:48.099  [2024-11-20T21:15:07.161Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-20T21:15:08.546Z] Copying: 33/1024 [MB] (21 MBps) [2024-11-20T21:15:09.490Z] Copying: 43/1024 [MB] (10 MBps) [2024-11-20T21:15:10.435Z] Copying: 55/1024 [MB] (11 MBps) [2024-11-20T21:15:11.378Z] Copying: 77/1024 [MB] (21 MBps) [2024-11-20T21:15:12.324Z] Copying: 100/1024 [MB] (22 MBps) [2024-11-20T21:15:13.267Z] Copying: 119/1024 [MB] (19 MBps) [2024-11-20T21:15:14.210Z] Copying: 131/1024 [MB] (12 MBps) [2024-11-20T21:15:15.153Z] Copying: 145/1024 [MB] (13 MBps) [2024-11-20T21:15:16.543Z] Copying: 157/1024 [MB] (11 MBps) [2024-11-20T21:15:17.488Z] Copying: 172/1024 [MB] (15 MBps) [2024-11-20T21:15:18.465Z] Copying: 185/1024 [MB] (13 MBps) [2024-11-20T21:15:19.420Z] Copying: 201/1024 [MB] (16 MBps) [2024-11-20T21:15:20.358Z] Copying: 228/1024 [MB] (27 MBps) [2024-11-20T21:15:21.295Z] Copying: 242/1024 [MB] (13 MBps) [2024-11-20T21:15:22.238Z] Copying: 253/1024 [MB] (11 MBps) [2024-11-20T21:15:23.174Z] Copying: 266/1024 [MB] (12 MBps) [2024-11-20T21:15:24.556Z] Copying: 278/1024 [MB] (12 MBps) [2024-11-20T21:15:25.494Z] Copying: 290/1024 [MB] (11 MBps) [2024-11-20T21:15:26.435Z] Copying: 300/1024 [MB] (10 MBps) [2024-11-20T21:15:27.377Z] Copying: 316/1024 [MB] (16 MBps) [2024-11-20T21:15:28.320Z] Copying: 331/1024 [MB] (14 MBps) [2024-11-20T21:15:29.263Z] Copying: 342/1024 [MB] (10 MBps) [2024-11-20T21:15:30.196Z] Copying: 352/1024 [MB] (10 MBps) [2024-11-20T21:15:31.580Z] Copying: 367/1024 [MB] (14 MBps) [2024-11-20T21:15:32.150Z] Copying: 378/1024 [MB] (11 MBps) [2024-11-20T21:15:33.525Z] Copying: 391/1024 [MB] (12 MBps) [2024-11-20T21:15:34.458Z] Copying: 404/1024 [MB] (12 MBps) [2024-11-20T21:15:35.391Z] Copying: 417/1024 [MB] (13 MBps) [2024-11-20T21:15:36.324Z] Copying: 431/1024 [MB] (13 MBps) [2024-11-20T21:15:37.258Z] Copying: 459/1024 [MB] (28 MBps) [2024-11-20T21:15:38.192Z] Copying: 474/1024 [MB] (15 MBps) [2024-11-20T21:15:39.570Z] Copying: 500/1024 [MB] (25 MBps) [2024-11-20T21:15:40.144Z] Copying: 513/1024 [MB] (13 MBps) [2024-11-20T21:15:41.532Z] Copying: 525/1024 [MB] (12 MBps) [2024-11-20T21:15:42.476Z] Copying: 535/1024 [MB] (10 MBps) [2024-11-20T21:15:43.417Z] Copying: 545/1024 [MB] (10 MBps) [2024-11-20T21:15:44.359Z] Copying: 562/1024 [MB] (16 MBps) [2024-11-20T21:15:45.300Z] Copying: 586148/1048576 [kB] (10224 kBps) [2024-11-20T21:15:46.244Z] Copying: 586/1024 [MB] (13 MBps) [2024-11-20T21:15:47.188Z] Copying: 600/1024 [MB] (14 MBps) [2024-11-20T21:15:48.567Z] Copying: 614/1024 [MB] (13 MBps) [2024-11-20T21:15:49.501Z] Copying: 626/1024 [MB] (11 MBps) [2024-11-20T21:15:50.484Z] Copying: 640/1024 [MB] (13 MBps) [2024-11-20T21:15:51.472Z] Copying: 653/1024 [MB] (13 MBps) [2024-11-20T21:15:52.414Z] Copying: 665/1024 [MB] (12 MBps) [2024-11-20T21:15:53.360Z] Copying: 676/1024 [MB] (10 MBps) [2024-11-20T21:15:54.299Z] Copying: 687/1024 [MB] (11 MBps) [2024-11-20T21:15:55.233Z] Copying: 698/1024 [MB] (10 MBps) [2024-11-20T21:15:56.174Z] Copying: 711/1024 [MB] (12 MBps) [2024-11-20T21:15:57.548Z] Copying: 722/1024 [MB] (10 MBps) [2024-11-20T21:15:58.483Z] Copying: 735/1024 [MB] (13 MBps) [2024-11-20T21:15:59.416Z] Copying: 748/1024 [MB] (13 MBps) [2024-11-20T21:16:00.352Z] Copying: 
762/1024 [MB] (13 MBps) [2024-11-20T21:16:01.295Z] Copying: 775/1024 [MB] (13 MBps) [2024-11-20T21:16:02.239Z] Copying: 785/1024 [MB] (10 MBps) [2024-11-20T21:16:03.184Z] Copying: 814432/1048576 [kB] (10168 kBps) [2024-11-20T21:16:04.572Z] Copying: 824644/1048576 [kB] (10212 kBps) [2024-11-20T21:16:05.146Z] Copying: 815/1024 [MB] (10 MBps) [2024-11-20T21:16:06.533Z] Copying: 825/1024 [MB] (10 MBps) [2024-11-20T21:16:07.470Z] Copying: 835/1024 [MB] (10 MBps) [2024-11-20T21:16:08.405Z] Copying: 846/1024 [MB] (10 MBps) [2024-11-20T21:16:09.341Z] Copying: 858/1024 [MB] (11 MBps) [2024-11-20T21:16:10.279Z] Copying: 870/1024 [MB] (12 MBps) [2024-11-20T21:16:11.220Z] Copying: 883/1024 [MB] (12 MBps) [2024-11-20T21:16:12.159Z] Copying: 894/1024 [MB] (11 MBps) [2024-11-20T21:16:13.540Z] Copying: 905/1024 [MB] (10 MBps) [2024-11-20T21:16:14.475Z] Copying: 917/1024 [MB] (11 MBps) [2024-11-20T21:16:15.413Z] Copying: 927/1024 [MB] (10 MBps) [2024-11-20T21:16:16.353Z] Copying: 939/1024 [MB] (11 MBps) [2024-11-20T21:16:17.285Z] Copying: 955/1024 [MB] (15 MBps) [2024-11-20T21:16:18.226Z] Copying: 969/1024 [MB] (13 MBps) [2024-11-20T21:16:19.167Z] Copying: 986/1024 [MB] (17 MBps) [2024-11-20T21:16:20.556Z] Copying: 998/1024 [MB] (12 MBps) [2024-11-20T21:16:21.498Z] Copying: 1010/1024 [MB] (11 MBps) [2024-11-20T21:16:22.517Z] Copying: 1020/1024 [MB] (10 MBps) [2024-11-20T21:16:22.517Z] Copying: 1048468/1048576 [kB] (3840 kBps) [2024-11-20T21:16:22.518Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-20 21:16:22.284684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.399 [2024-11-20 21:16:22.284791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:04.399 [2024-11-20 21:16:22.284822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:04.399 [2024-11-20 21:16:22.284835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.399 [2024-11-20 21:16:22.287137] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:04.399 [2024-11-20 21:16:22.291067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.399 [2024-11-20 21:16:22.291129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:04.399 [2024-11-20 21:16:22.291145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.439 ms 00:34:04.399 [2024-11-20 21:16:22.291156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.399 [2024-11-20 21:16:22.302453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.399 [2024-11-20 21:16:22.302669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:04.399 [2024-11-20 21:16:22.302694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.032 ms 00:34:04.399 [2024-11-20 21:16:22.302705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.399 [2024-11-20 21:16:22.302782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.399 [2024-11-20 21:16:22.302794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:04.399 [2024-11-20 21:16:22.302805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:04.399 [2024-11-20 21:16:22.302823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.399 [2024-11-20 21:16:22.302895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
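Annotation: the copy pass above ends with "Copying: 1024/1024 [MB] (average 13 MBps)", and the Jenkins timestamps bracketing the run (roughly 21:15:05 to 21:16:22) are consistent with that figure. A small sketch that recomputes the rate from the progress records; the regex encodes only the line format visible in this log and skips the occasional [kB] samples.

  # Sketch: recompute the average copy rate from progress records like
  #   [2024-11-20T21:16:22.518Z] Copying: 1024/1024 [MB] (average 13 MBps)
  import re
  from datetime import datetime

  PAT = re.compile(r"\[(\d{4}-\d{2}-\d{2}T[\d:.]+)Z\]\s+Copying: (\d+)/\d+ \[MB\]")

  def average_mbps(lines):
      hits = []
      for line in lines:
          for m in PAT.finditer(line):   # several records per flattened line
              hits.append((datetime.fromisoformat(m.group(1)), int(m.group(2))))
      (t0, m0), (t1, m1) = hits[0], hits[-1]
      return (m1 - m0) / (t1 - t0).total_seconds()

Fed the records above, this gives roughly 13.4 MBps between the first and last [MB] samples, consistent with the log's own "average 13 MBps" over the full run.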
00:34:04.399 [2024-11-20 21:16:22.302911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:04.399 [2024-11-20 21:16:22.302927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:34:04.399 [2024-11-20 21:16:22.302935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.399 [2024-11-20 21:16:22.302951] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:04.399 [2024-11-20 21:16:22.302966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128000 / 261120 wr_cnt: 1 state: open 00:34:04.399 [2024-11-20 21:16:22.302977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.302985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.302994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303354] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:04.399 [2024-11-20 21:16:22.303554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 
21:16:22.303563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:34:04.400 [2024-11-20 21:16:22.303805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:04.400 [2024-11-20 21:16:22.303848] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:04.400 [2024-11-20 21:16:22.303867] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f11f99b-a93a-40e6-9e26-852bcbc4a86c 00:34:04.400 [2024-11-20 21:16:22.303880] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128000 00:34:04.400 [2024-11-20 21:16:22.303888] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128032 00:34:04.400 [2024-11-20 21:16:22.303897] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128000 00:34:04.400 [2024-11-20 21:16:22.303906] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:34:04.400 [2024-11-20 21:16:22.303914] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:04.400 [2024-11-20 21:16:22.303926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:04.400 [2024-11-20 21:16:22.303934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:04.400 [2024-11-20 21:16:22.303943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:04.400 [2024-11-20 21:16:22.303950] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:04.400 [2024-11-20 21:16:22.303958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.400 [2024-11-20 21:16:22.303966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:04.400 [2024-11-20 21:16:22.303974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:34:04.400 [2024-11-20 21:16:22.303983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.307093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.400 [2024-11-20 21:16:22.307128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:04.400 [2024-11-20 21:16:22.307140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.093 ms 00:34:04.400 [2024-11-20 21:16:22.307156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.307308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:04.400 [2024-11-20 21:16:22.307319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:04.400 [2024-11-20 21:16:22.307328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:34:04.400 [2024-11-20 21:16:22.307345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.317159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.400 [2024-11-20 21:16:22.317337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:04.400 [2024-11-20 21:16:22.317364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
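Annotation: the statistics dump just above reports total writes 128032 against 128000 user writes and a WAF of 1.0003, while the earlier dump in this section (taken before any user I/O, with 0 user writes) printed "WAF: inf". Both are consistent with write amplification computed as total media writes over user writes, and the 32-write difference here matches the 32 metadata-only writes of the idle dump. A minimal sketch; treating zero user writes as inf is the only convention assumed beyond the numbers in the log.

  # Sketch: write-amplification factor as reported by ftl_dev_dump_stats.
  # WAF = total media writes / user writes; no user writes yet means "inf".
  def waf(total_writes, user_writes):
      return float("inf") if user_writes == 0 else total_writes / user_writes

  print(waf(32, 0))            # inf, as in the idle dump earlier in this run
  print(waf(128032, 128000))   # 1.00025, which the log prints as 1.0003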
00:34:04.400 [2024-11-20 21:16:22.317373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.317437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.400 [2024-11-20 21:16:22.317447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:04.400 [2024-11-20 21:16:22.317457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.400 [2024-11-20 21:16:22.317465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.317524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.400 [2024-11-20 21:16:22.317536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:04.400 [2024-11-20 21:16:22.317545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.400 [2024-11-20 21:16:22.317558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.317579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.400 [2024-11-20 21:16:22.317588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:04.400 [2024-11-20 21:16:22.317598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.400 [2024-11-20 21:16:22.317608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.336881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.400 [2024-11-20 21:16:22.336933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:04.400 [2024-11-20 21:16:22.336953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.400 [2024-11-20 21:16:22.336961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.352597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.400 [2024-11-20 21:16:22.352868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:04.400 [2024-11-20 21:16:22.352890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.400 [2024-11-20 21:16:22.352902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.353026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.400 [2024-11-20 21:16:22.353039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:04.400 [2024-11-20 21:16:22.353050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.400 [2024-11-20 21:16:22.353059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.353106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.400 [2024-11-20 21:16:22.353119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:04.400 [2024-11-20 21:16:22.353129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.400 [2024-11-20 21:16:22.353139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.353213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.400 [2024-11-20 21:16:22.353226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:04.400 [2024-11-20 
21:16:22.353236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.400 [2024-11-20 21:16:22.353246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.353278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.400 [2024-11-20 21:16:22.353290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:04.400 [2024-11-20 21:16:22.353301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.400 [2024-11-20 21:16:22.353311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.353373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.400 [2024-11-20 21:16:22.353390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:04.400 [2024-11-20 21:16:22.353402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.400 [2024-11-20 21:16:22.353412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.353479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:04.400 [2024-11-20 21:16:22.353493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:04.400 [2024-11-20 21:16:22.353505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:04.400 [2024-11-20 21:16:22.353515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:04.400 [2024-11-20 21:16:22.353689] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 70.244 ms, result 0 00:34:05.346 00:34:05.346 00:34:05.346 21:16:23 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:34:05.346 [2024-11-20 21:16:23.453505] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization... 
00:34:05.346 [2024-11-20 21:16:23.453650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96995 ] 00:34:05.607 [2024-11-20 21:16:23.603673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:05.607 [2024-11-20 21:16:23.642693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:05.869 [2024-11-20 21:16:23.790020] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:05.869 [2024-11-20 21:16:23.790402] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:05.869 [2024-11-20 21:16:23.953430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.870 [2024-11-20 21:16:23.953503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:05.870 [2024-11-20 21:16:23.953523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:34:05.870 [2024-11-20 21:16:23.953533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.870 [2024-11-20 21:16:23.953606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.870 [2024-11-20 21:16:23.953622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:05.870 [2024-11-20 21:16:23.953632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:34:05.870 [2024-11-20 21:16:23.953641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.870 [2024-11-20 21:16:23.953668] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:05.870 [2024-11-20 21:16:23.954021] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:05.870 [2024-11-20 21:16:23.954045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.870 [2024-11-20 21:16:23.954054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:05.870 [2024-11-20 21:16:23.954065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:34:05.870 [2024-11-20 21:16:23.954078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.870 [2024-11-20 21:16:23.954427] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:05.870 [2024-11-20 21:16:23.954469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.870 [2024-11-20 21:16:23.954480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:05.870 [2024-11-20 21:16:23.954493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:34:05.870 [2024-11-20 21:16:23.954511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.870 [2024-11-20 21:16:23.954589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.870 [2024-11-20 21:16:23.954606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:05.870 [2024-11-20 21:16:23.954620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:34:05.870 [2024-11-20 21:16:23.954629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.870 [2024-11-20 21:16:23.954927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:05.870 [2024-11-20 21:16:23.954946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:05.870 [2024-11-20 21:16:23.954956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:34:05.870 [2024-11-20 21:16:23.954973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.870 [2024-11-20 21:16:23.955120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.870 [2024-11-20 21:16:23.955143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:05.870 [2024-11-20 21:16:23.955157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:34:05.870 [2024-11-20 21:16:23.955165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.870 [2024-11-20 21:16:23.955196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.870 [2024-11-20 21:16:23.955206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:05.870 [2024-11-20 21:16:23.955215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:05.870 [2024-11-20 21:16:23.955224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.870 [2024-11-20 21:16:23.955248] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:05.870 [2024-11-20 21:16:23.958316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.870 [2024-11-20 21:16:23.958505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:05.870 [2024-11-20 21:16:23.958580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.074 ms 00:34:05.870 [2024-11-20 21:16:23.958607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.870 [2024-11-20 21:16:23.958678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.870 [2024-11-20 21:16:23.958710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:05.870 [2024-11-20 21:16:23.958733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:34:05.870 [2024-11-20 21:16:23.958786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.870 [2024-11-20 21:16:23.958864] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:05.870 [2024-11-20 21:16:23.959060] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:05.870 [2024-11-20 21:16:23.959140] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:05.870 [2024-11-20 21:16:23.959189] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:05.870 [2024-11-20 21:16:23.959327] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:05.870 [2024-11-20 21:16:23.959370] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:05.870 [2024-11-20 21:16:23.959403] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:05.870 [2024-11-20 21:16:23.959502] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:05.870 [2024-11-20 21:16:23.959544] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:05.870 [2024-11-20 21:16:23.959579] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:05.870 [2024-11-20 21:16:23.959599] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:05.870 [2024-11-20 21:16:23.959619] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:05.870 [2024-11-20 21:16:23.959644] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:05.870 [2024-11-20 21:16:23.959666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.870 [2024-11-20 21:16:23.959687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:05.870 [2024-11-20 21:16:23.959761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:34:05.870 [2024-11-20 21:16:23.959788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.870 [2024-11-20 21:16:23.959901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.870 [2024-11-20 21:16:23.959956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:05.870 [2024-11-20 21:16:23.959978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:34:05.870 [2024-11-20 21:16:23.960048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.870 [2024-11-20 21:16:23.960180] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:05.870 [2024-11-20 21:16:23.960210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:05.870 [2024-11-20 21:16:23.960233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:05.870 [2024-11-20 21:16:23.960254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:05.870 [2024-11-20 21:16:23.960274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:05.870 [2024-11-20 21:16:23.960294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:05.870 [2024-11-20 21:16:23.960318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:05.870 [2024-11-20 21:16:23.960337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:05.870 [2024-11-20 21:16:23.960355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:05.870 [2024-11-20 21:16:23.960444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:05.870 [2024-11-20 21:16:23.960466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:05.870 [2024-11-20 21:16:23.960486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:05.870 [2024-11-20 21:16:23.960505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:05.870 [2024-11-20 21:16:23.960524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:05.870 [2024-11-20 21:16:23.960544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:05.870 [2024-11-20 21:16:23.960565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:05.870 [2024-11-20 21:16:23.960583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:05.870 [2024-11-20 21:16:23.960602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:05.870 [2024-11-20 21:16:23.960621] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:05.870 [2024-11-20 21:16:23.960674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:05.870 [2024-11-20 21:16:23.960686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:05.870 [2024-11-20 21:16:23.960695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:05.870 [2024-11-20 21:16:23.960708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:05.870 [2024-11-20 21:16:23.960716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:05.870 [2024-11-20 21:16:23.960724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:05.870 [2024-11-20 21:16:23.960731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:05.870 [2024-11-20 21:16:23.960738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:05.870 [2024-11-20 21:16:23.960763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:05.870 [2024-11-20 21:16:23.960772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:05.870 [2024-11-20 21:16:23.960781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:05.870 [2024-11-20 21:16:23.960789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:05.870 [2024-11-20 21:16:23.960797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:05.870 [2024-11-20 21:16:23.960805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:05.870 [2024-11-20 21:16:23.960817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:05.870 [2024-11-20 21:16:23.960825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:05.870 [2024-11-20 21:16:23.960832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:05.870 [2024-11-20 21:16:23.960840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:05.870 [2024-11-20 21:16:23.960848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:05.870 [2024-11-20 21:16:23.960860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:05.870 [2024-11-20 21:16:23.960868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:05.871 [2024-11-20 21:16:23.960874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:05.871 [2024-11-20 21:16:23.960882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:05.871 [2024-11-20 21:16:23.960889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:05.871 [2024-11-20 21:16:23.960896] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:05.871 [2024-11-20 21:16:23.960905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:05.871 [2024-11-20 21:16:23.960914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:05.871 [2024-11-20 21:16:23.960922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:05.871 [2024-11-20 21:16:23.960934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:05.871 [2024-11-20 21:16:23.960941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:05.871 [2024-11-20 21:16:23.960949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:05.871 
[2024-11-20 21:16:23.960956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:05.871 [2024-11-20 21:16:23.960962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:05.871 [2024-11-20 21:16:23.960972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:05.871 [2024-11-20 21:16:23.960984] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:05.871 [2024-11-20 21:16:23.960997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:05.871 [2024-11-20 21:16:23.961014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:05.871 [2024-11-20 21:16:23.961021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:05.871 [2024-11-20 21:16:23.961029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:05.871 [2024-11-20 21:16:23.961037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:05.871 [2024-11-20 21:16:23.961047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:05.871 [2024-11-20 21:16:23.961054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:05.871 [2024-11-20 21:16:23.961062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:05.871 [2024-11-20 21:16:23.961069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:05.871 [2024-11-20 21:16:23.961076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:05.871 [2024-11-20 21:16:23.961084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:05.871 [2024-11-20 21:16:23.961093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:05.871 [2024-11-20 21:16:23.961101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:05.871 [2024-11-20 21:16:23.961109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:05.871 [2024-11-20 21:16:23.961124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:05.871 [2024-11-20 21:16:23.961132] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:05.871 [2024-11-20 21:16:23.961148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:05.871 [2024-11-20 21:16:23.961157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:05.871 [2024-11-20 21:16:23.961164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:05.871 [2024-11-20 21:16:23.961174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:05.871 [2024-11-20 21:16:23.961182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:05.871 [2024-11-20 21:16:23.961191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.871 [2024-11-20 21:16:23.961199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:05.871 [2024-11-20 21:16:23.961208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:34:05.871 [2024-11-20 21:16:23.961218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.871 [2024-11-20 21:16:23.975619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.871 [2024-11-20 21:16:23.975828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:05.871 [2024-11-20 21:16:23.975865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.343 ms 00:34:05.871 [2024-11-20 21:16:23.975874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:05.871 [2024-11-20 21:16:23.975968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:05.871 [2024-11-20 21:16:23.975978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:05.871 [2024-11-20 21:16:23.975988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:34:05.871 [2024-11-20 21:16:23.975997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.000381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.000461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:06.132 [2024-11-20 21:16:24.000481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.322 ms 00:34:06.132 [2024-11-20 21:16:24.000503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.000566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.000583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:06.132 [2024-11-20 21:16:24.000597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:06.132 [2024-11-20 21:16:24.000611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.000800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.000819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:06.132 [2024-11-20 21:16:24.000840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:34:06.132 [2024-11-20 21:16:24.000851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.001037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.001053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:06.132 [2024-11-20 21:16:24.001074] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:34:06.132 [2024-11-20 21:16:24.001086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.013024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.013073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:06.132 [2024-11-20 21:16:24.013093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.909 ms 00:34:06.132 [2024-11-20 21:16:24.013106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.013256] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:06.132 [2024-11-20 21:16:24.013271] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:06.132 [2024-11-20 21:16:24.013284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.013294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:06.132 [2024-11-20 21:16:24.013306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:34:06.132 [2024-11-20 21:16:24.013320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.025649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.025886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:06.132 [2024-11-20 21:16:24.025910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.306 ms 00:34:06.132 [2024-11-20 21:16:24.025920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.026071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.026091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:06.132 [2024-11-20 21:16:24.026101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:34:06.132 [2024-11-20 21:16:24.026110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.026171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.026183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:06.132 [2024-11-20 21:16:24.026197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:34:06.132 [2024-11-20 21:16:24.026205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.026527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.026547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:06.132 [2024-11-20 21:16:24.026564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:34:06.132 [2024-11-20 21:16:24.026574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.026592] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:06.132 [2024-11-20 21:16:24.026606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.026618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:34:06.132 [2024-11-20 21:16:24.026630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:34:06.132 [2024-11-20 21:16:24.026640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.037540] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:06.132 [2024-11-20 21:16:24.037877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.037896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:06.132 [2024-11-20 21:16:24.037907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.218 ms 00:34:06.132 [2024-11-20 21:16:24.037916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.040475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.040513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:06.132 [2024-11-20 21:16:24.040525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:34:06.132 [2024-11-20 21:16:24.040534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.040622] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:34:06.132 [2024-11-20 21:16:24.041408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.041516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:06.132 [2024-11-20 21:16:24.041575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.811 ms 00:34:06.132 [2024-11-20 21:16:24.041601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.041661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.041687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:06.132 [2024-11-20 21:16:24.041699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:34:06.132 [2024-11-20 21:16:24.041708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.041790] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:06.132 [2024-11-20 21:16:24.041812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.041821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:06.132 [2024-11-20 21:16:24.041836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:34:06.132 [2024-11-20 21:16:24.041845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.049360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.132 [2024-11-20 21:16:24.049415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:06.132 [2024-11-20 21:16:24.049427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.478 ms 00:34:06.132 [2024-11-20 21:16:24.049442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.132 [2024-11-20 21:16:24.049538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:06.133 [2024-11-20 21:16:24.049550] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:06.133 [2024-11-20 21:16:24.049559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:34:06.133 [2024-11-20 21:16:24.049568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:06.133 [2024-11-20 21:16:24.051033] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 97.056 ms, result 0 00:34:07.518  [2024-11-20T21:16:26.576Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-20T21:16:27.518Z] Copying: 23/1024 [MB] (11 MBps) [2024-11-20T21:16:28.465Z] Copying: 39/1024 [MB] (15 MBps) [2024-11-20T21:16:29.407Z] Copying: 49/1024 [MB] (10 MBps) [2024-11-20T21:16:30.351Z] Copying: 60/1024 [MB] (10 MBps) [2024-11-20T21:16:31.296Z] Copying: 71/1024 [MB] (10 MBps) [2024-11-20T21:16:32.686Z] Copying: 83/1024 [MB] (12 MBps) [2024-11-20T21:16:33.259Z] Copying: 101/1024 [MB] (18 MBps) [2024-11-20T21:16:34.649Z] Copying: 112/1024 [MB] (11 MBps) [2024-11-20T21:16:35.596Z] Copying: 123/1024 [MB] (10 MBps) [2024-11-20T21:16:36.537Z] Copying: 138/1024 [MB] (14 MBps) [2024-11-20T21:16:37.480Z] Copying: 150/1024 [MB] (12 MBps) [2024-11-20T21:16:38.422Z] Copying: 166/1024 [MB] (16 MBps) [2024-11-20T21:16:39.411Z] Copying: 178/1024 [MB] (11 MBps) [2024-11-20T21:16:40.352Z] Copying: 192/1024 [MB] (14 MBps) [2024-11-20T21:16:41.296Z] Copying: 203/1024 [MB] (10 MBps) [2024-11-20T21:16:42.678Z] Copying: 215/1024 [MB] (11 MBps) [2024-11-20T21:16:43.621Z] Copying: 232/1024 [MB] (16 MBps) [2024-11-20T21:16:44.571Z] Copying: 251/1024 [MB] (19 MBps) [2024-11-20T21:16:45.514Z] Copying: 267/1024 [MB] (16 MBps) [2024-11-20T21:16:46.456Z] Copying: 285/1024 [MB] (18 MBps) [2024-11-20T21:16:47.400Z] Copying: 300/1024 [MB] (14 MBps) [2024-11-20T21:16:48.344Z] Copying: 314/1024 [MB] (14 MBps) [2024-11-20T21:16:49.286Z] Copying: 332/1024 [MB] (17 MBps) [2024-11-20T21:16:50.281Z] Copying: 349/1024 [MB] (17 MBps) [2024-11-20T21:16:51.670Z] Copying: 364/1024 [MB] (15 MBps) [2024-11-20T21:16:52.612Z] Copying: 376/1024 [MB] (12 MBps) [2024-11-20T21:16:53.553Z] Copying: 387/1024 [MB] (10 MBps) [2024-11-20T21:16:54.499Z] Copying: 400/1024 [MB] (13 MBps) [2024-11-20T21:16:55.441Z] Copying: 412/1024 [MB] (11 MBps) [2024-11-20T21:16:56.382Z] Copying: 423/1024 [MB] (11 MBps) [2024-11-20T21:16:57.327Z] Copying: 436/1024 [MB] (13 MBps) [2024-11-20T21:16:58.272Z] Copying: 447/1024 [MB] (10 MBps) [2024-11-20T21:16:59.659Z] Copying: 457/1024 [MB] (10 MBps) [2024-11-20T21:17:00.593Z] Copying: 468/1024 [MB] (10 MBps) [2024-11-20T21:17:01.530Z] Copying: 479/1024 [MB] (11 MBps) [2024-11-20T21:17:02.467Z] Copying: 490/1024 [MB] (10 MBps) [2024-11-20T21:17:03.405Z] Copying: 503/1024 [MB] (13 MBps) [2024-11-20T21:17:04.348Z] Copying: 515/1024 [MB] (12 MBps) [2024-11-20T21:17:05.282Z] Copying: 527/1024 [MB] (11 MBps) [2024-11-20T21:17:06.662Z] Copying: 540/1024 [MB] (12 MBps) [2024-11-20T21:17:07.599Z] Copying: 552/1024 [MB] (12 MBps) [2024-11-20T21:17:08.533Z] Copying: 567/1024 [MB] (14 MBps) [2024-11-20T21:17:09.471Z] Copying: 581/1024 [MB] (14 MBps) [2024-11-20T21:17:10.412Z] Copying: 601/1024 [MB] (20 MBps) [2024-11-20T21:17:11.355Z] Copying: 625/1024 [MB] (23 MBps) [2024-11-20T21:17:12.297Z] Copying: 646/1024 [MB] (20 MBps) [2024-11-20T21:17:13.675Z] Copying: 657/1024 [MB] (10 MBps) [2024-11-20T21:17:14.618Z] Copying: 674/1024 [MB] (17 MBps) [2024-11-20T21:17:15.558Z] Copying: 689/1024 [MB] (15 MBps) [2024-11-20T21:17:16.499Z] Copying: 701/1024 [MB] (11 MBps) 
[2024-11-20T21:17:17.439Z] Copying: 713/1024 [MB] (11 MBps) [2024-11-20T21:17:18.399Z] Copying: 724/1024 [MB] (11 MBps) [2024-11-20T21:17:19.403Z] Copying: 735/1024 [MB] (10 MBps) [2024-11-20T21:17:20.340Z] Copying: 745/1024 [MB] (10 MBps) [2024-11-20T21:17:21.281Z] Copying: 757/1024 [MB] (12 MBps) [2024-11-20T21:17:22.659Z] Copying: 769/1024 [MB] (11 MBps) [2024-11-20T21:17:23.601Z] Copying: 783/1024 [MB] (14 MBps) [2024-11-20T21:17:24.536Z] Copying: 795/1024 [MB] (12 MBps) [2024-11-20T21:17:25.479Z] Copying: 807/1024 [MB] (12 MBps) [2024-11-20T21:17:26.421Z] Copying: 819/1024 [MB] (11 MBps) [2024-11-20T21:17:27.362Z] Copying: 830/1024 [MB] (11 MBps) [2024-11-20T21:17:28.320Z] Copying: 844/1024 [MB] (13 MBps) [2024-11-20T21:17:29.264Z] Copying: 855/1024 [MB] (11 MBps) [2024-11-20T21:17:30.643Z] Copying: 866/1024 [MB] (11 MBps) [2024-11-20T21:17:31.587Z] Copying: 878/1024 [MB] (11 MBps) [2024-11-20T21:17:32.529Z] Copying: 897/1024 [MB] (19 MBps) [2024-11-20T21:17:33.465Z] Copying: 909/1024 [MB] (11 MBps) [2024-11-20T21:17:34.398Z] Copying: 920/1024 [MB] (11 MBps) [2024-11-20T21:17:35.339Z] Copying: 934/1024 [MB] (13 MBps) [2024-11-20T21:17:36.277Z] Copying: 946/1024 [MB] (12 MBps) [2024-11-20T21:17:37.655Z] Copying: 958/1024 [MB] (12 MBps) [2024-11-20T21:17:38.589Z] Copying: 969/1024 [MB] (10 MBps) [2024-11-20T21:17:39.523Z] Copying: 980/1024 [MB] (11 MBps) [2024-11-20T21:17:40.458Z] Copying: 993/1024 [MB] (13 MBps) [2024-11-20T21:17:41.398Z] Copying: 1006/1024 [MB] (12 MBps) [2024-11-20T21:17:41.969Z] Copying: 1017/1024 [MB] (10 MBps) [2024-11-20T21:17:41.969Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-20 21:17:41.934282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:23.850 [2024-11-20 21:17:41.934382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:23.850 [2024-11-20 21:17:41.934401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:23.850 [2024-11-20 21:17:41.934416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.850 [2024-11-20 21:17:41.934442] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:23.850 [2024-11-20 21:17:41.935451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:23.850 [2024-11-20 21:17:41.935495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:23.850 [2024-11-20 21:17:41.935509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:35:23.850 [2024-11-20 21:17:41.935519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.850 [2024-11-20 21:17:41.935797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:23.850 [2024-11-20 21:17:41.935808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:23.850 [2024-11-20 21:17:41.935819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:35:23.850 [2024-11-20 21:17:41.935828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.850 [2024-11-20 21:17:41.935869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:23.850 [2024-11-20 21:17:41.935879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:23.850 [2024-11-20 21:17:41.935887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:35:23.850 [2024-11-20 21:17:41.935896] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:35:23.850 [2024-11-20 21:17:41.935968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:23.850 [2024-11-20 21:17:41.935979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:23.850 [2024-11-20 21:17:41.935991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:35:23.851 [2024-11-20 21:17:41.936023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:23.851 [2024-11-20 21:17:41.936040] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:23.851 [2024-11-20 21:17:41.936057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:35:23.851 [2024-11-20 21:17:41.936069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936437] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:23.851 [2024-11-20 21:17:41.936639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 
21:17:41.936647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:23.852 [2024-11-20 21:17:41.936872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:35:23.852 [2024-11-20 21:17:41.936880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:35:23.852 [2024-11-20 21:17:41.936889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:35:23.852 [2024-11-20 21:17:41.936897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:35:23.852 [2024-11-20 21:17:41.936905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:35:23.852 [2024-11-20 21:17:41.936915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:35:23.852 [2024-11-20 21:17:41.936932] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:35:23.852 [2024-11-20 21:17:41.936946] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5f11f99b-a93a-40e6-9e26-852bcbc4a86c
00:35:23.852 [2024-11-20 21:17:41.936956] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:35:23.852 [2024-11-20 21:17:41.936964] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3104
00:35:23.852 [2024-11-20 21:17:41.936971] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3072
00:35:23.852 [2024-11-20 21:17:41.936980] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104
00:35:23.852 [2024-11-20 21:17:41.936988] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:35:23.852 [2024-11-20 21:17:41.936999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:35:23.852 [2024-11-20 21:17:41.937007] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:35:23.852 [2024-11-20 21:17:41.937014] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:35:23.852 [2024-11-20 21:17:41.937021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:35:23.852 [2024-11-20 21:17:41.937029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:23.852 [2024-11-20 21:17:41.937039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:35:23.852 [2024-11-20 21:17:41.937048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms
00:35:23.852 [2024-11-20 21:17:41.937057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:23.852 [2024-11-20 21:17:41.940276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:23.852 [2024-11-20 21:17:41.940318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:35:23.852 [2024-11-20 21:17:41.940330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.198 ms
00:35:23.852 [2024-11-20 21:17:41.940344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:23.852 [2024-11-20 21:17:41.940501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:23.852 [2024-11-20 21:17:41.940516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:35:23.852 [2024-11-20 21:17:41.940526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms
00:35:23.852 [2024-11-20 21:17:41.940535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:23.852 [2024-11-20 21:17:41.952045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:23.852 [2024-11-20 21:17:41.952109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:35:23.852 [2024-11-20 21:17:41.952122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:23.852 [2024-11-20 21:17:41.952131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:23.852 [2024-11-20 21:17:41.952214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:23.852 [2024-11-20 21:17:41.952223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:35:23.852 [2024-11-20 21:17:41.952233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:23.852 [2024-11-20 21:17:41.952242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:23.852 [2024-11-20 21:17:41.952315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:23.852 [2024-11-20 21:17:41.952335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:35:23.852 [2024-11-20 21:17:41.952347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:23.852 [2024-11-20 21:17:41.952355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:23.852 [2024-11-20 21:17:41.952375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:23.852 [2024-11-20 21:17:41.952384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:35:23.852 [2024-11-20 21:17:41.952393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:23.852 [2024-11-20 21:17:41.952401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:24.113 [2024-11-20 21:17:41.972535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:24.113 [2024-11-20 21:17:41.972612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:35:24.113 [2024-11-20 21:17:41.972625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:24.113 [2024-11-20 21:17:41.972635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:24.113 [2024-11-20 21:17:41.988966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:24.113 [2024-11-20 21:17:41.989033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:35:24.113 [2024-11-20 21:17:41.989059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:24.114 [2024-11-20 21:17:41.989069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:24.114 [2024-11-20 21:17:41.989133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:24.114 [2024-11-20 21:17:41.989143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:35:24.114 [2024-11-20 21:17:41.989153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:24.114 [2024-11-20 21:17:41.989167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:24.114 [2024-11-20 21:17:41.989209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:24.114 [2024-11-20 21:17:41.989219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:35:24.114 [2024-11-20 21:17:41.989229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:24.114 [2024-11-20 21:17:41.989239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:24.114 [2024-11-20 21:17:41.989303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:24.114 [2024-11-20 21:17:41.989320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:35:24.114 [2024-11-20 21:17:41.989329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:24.114 [2024-11-20 21:17:41.989337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:24.114 [2024-11-20 21:17:41.989375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:24.114 [2024-11-20 21:17:41.989386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:35:24.114 [2024-11-20 21:17:41.989396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:24.114 [2024-11-20 21:17:41.989405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:24.114 [2024-11-20 21:17:41.989460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:24.114 [2024-11-20 21:17:41.989470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:35:24.114 [2024-11-20 21:17:41.989480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:24.114 [2024-11-20 21:17:41.989489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:24.114 [2024-11-20 21:17:41.989551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:35:24.114 [2024-11-20 21:17:41.989562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:35:24.114 [2024-11-20 21:17:41.989572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:35:24.114 [2024-11-20 21:17:41.989582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:24.114 [2024-11-20 21:17:41.989817] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 55.421 ms, result 0
00:35:24.373
00:35:24.373
00:35:24.373 21:17:42 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:35:26.914 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:35:26.914 Process with pid 94746 is not found
00:35:26.914 Remove shared memory files
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94746
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 94746 ']'
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 94746
00:35:26.914 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (94746) - No such process
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 94746 is not found'
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_5f11f99b-a93a-40e6-9e26-852bcbc4a86c_band_md /dev/hugepages/ftl_5f11f99b-a93a-40e6-9e26-852bcbc4a86c_l2p_l1 /dev/hugepages/ftl_5f11f99b-a93a-40e6-9e26-852bcbc4a86c_l2p_l2 /dev/hugepages/ftl_5f11f99b-a93a-40e6-9e26-852bcbc4a86c_l2p_l2_ctx /dev/hugepages/ftl_5f11f99b-a93a-40e6-9e26-852bcbc4a86c_nvc_md /dev/hugepages/ftl_5f11f99b-a93a-40e6-9e26-852bcbc4a86c_p2l_pool /dev/hugepages/ftl_5f11f99b-a93a-40e6-9e26-852bcbc4a86c_sb /dev/hugepages/ftl_5f11f99b-a93a-40e6-9e26-852bcbc4a86c_sb_shm /dev/hugepages/ftl_5f11f99b-a93a-40e6-9e26-852bcbc4a86c_trim_bitmap /dev/hugepages/ftl_5f11f99b-a93a-40e6-9e26-852bcbc4a86c_trim_log /dev/hugepages/ftl_5f11f99b-a93a-40e6-9e26-852bcbc4a86c_trim_md /dev/hugepages/ftl_5f11f99b-a93a-40e6-9e26-852bcbc4a86c_vmap
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:35:26.914 ************************************
00:35:26.914 END TEST ftl_restore_fast
00:35:26.914 ************************************
00:35:26.914
00:35:26.914 real 5m1.020s
00:35:26.914 user 4m48.524s
00:35:26.914 sys 0m11.910s
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:26.914 21:17:44 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:35:26.914 Process with pid 86077 is not found
00:35:26.914 21:17:44 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:35:26.914 21:17:44 ftl -- ftl/ftl.sh@14 -- # killprocess 86077
00:35:26.914 21:17:44 ftl -- common/autotest_common.sh@954 -- # '[' -z 86077 ']'
00:35:26.914 21:17:44 ftl -- common/autotest_common.sh@958 -- # kill -0 86077
00:35:26.914 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86077) - No such process
00:35:26.914 21:17:44 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 86077 is not found'
00:35:26.914 21:17:44 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:35:26.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:26.914 21:17:44 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=97822
00:35:26.914 21:17:44 ftl -- ftl/ftl.sh@20 -- # waitforlisten 97822
00:35:26.914 21:17:44 ftl -- common/autotest_common.sh@835 -- # '[' -z 97822 ']'
00:35:26.914 21:17:44 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:26.914 21:17:44 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:26.914 21:17:44 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:35:26.914 21:17:44 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:26.914 21:17:44 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:26.914 21:17:44 ftl -- common/autotest_common.sh@10 -- # set +x
00:35:26.914 [2024-11-20 21:17:44.777721] Starting SPDK v25.01-pre git sha1 557f022f6 / DPDK 22.11.4 initialization...
00:35:26.914 [2024-11-20 21:17:44.778114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97822 ]
00:35:26.914 [2024-11-20 21:17:44.925410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:26.914 [2024-11-20 21:17:44.967038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:27.856 21:17:45 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:27.856 21:17:45 ftl -- common/autotest_common.sh@868 -- # return 0
00:35:27.856 21:17:45 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:35:27.856 nvme0n1
00:35:27.856 21:17:45 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:35:27.856 21:17:45 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:35:27.856 21:17:45 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:35:28.118 21:17:46 ftl -- ftl/common.sh@28 -- # stores=0f57cb06-7b33-49a3-b306-0b43b406d1df
00:35:28.118 21:17:46 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:35:28.118 21:17:46 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0f57cb06-7b33-49a3-b306-0b43b406d1df
00:35:28.378 21:17:46 ftl -- ftl/ftl.sh@23 -- # killprocess 97822
00:35:28.378 21:17:46 ftl -- common/autotest_common.sh@954 -- # '[' -z 97822 ']'
00:35:28.378 21:17:46 ftl -- common/autotest_common.sh@958 -- # kill -0 97822
00:35:28.378 21:17:46 ftl -- common/autotest_common.sh@959 -- # uname
00:35:28.378 21:17:46 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:28.378 21:17:46 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97822
00:35:28.378 killing process with pid 97822
00:35:28.378 21:17:46 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:28.378 21:17:46 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:28.378 21:17:46 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97822'
00:35:28.378 21:17:46 ftl -- common/autotest_common.sh@973 -- # kill 97822
00:35:28.378 21:17:46 ftl -- common/autotest_common.sh@978 -- # wait 97822
00:35:28.962 21:17:46 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:35:28.962 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:35:28.962 Waiting for block devices as requested
00:35:28.962 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:35:29.294 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:35:29.294 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:35:29.294 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:35:34.584 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:35:34.584 21:17:52 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:35:34.584 Remove shared memory files
00:35:34.584 21:17:52 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:35:34.584 21:17:52 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:35:34.584 21:17:52 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:35:34.584 21:17:52 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:35:34.584 21:17:52 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:35:34.584 21:17:52 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:35:34.584 ************************************
00:35:34.584 END TEST ftl
00:35:34.584 ************************************
00:35:34.584
00:35:34.584 real 17m42.077s
00:35:34.584 user 19m35.946s
00:35:34.584 sys 1m21.503s
00:35:34.584 21:17:52 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:34.584 21:17:52 ftl -- common/autotest_common.sh@10 -- # set +x
00:35:34.584 21:17:52 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:35:34.584 21:17:52 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:35:34.584 21:17:52 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:35:34.584 21:17:52 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:35:34.584 21:17:52 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:35:34.584 21:17:52 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:35:34.584 21:17:52 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:35:34.584 21:17:52 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:35:34.584 21:17:52 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:35:34.584 21:17:52 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:35:34.584 21:17:52 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:34.584 21:17:52 -- common/autotest_common.sh@10 -- # set +x
00:35:34.584 21:17:52 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:35:34.584 21:17:52 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:35:34.584 21:17:52 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:35:34.584 21:17:52 -- common/autotest_common.sh@10 -- # set +x
00:35:35.971 INFO: APP EXITING
00:35:35.971 INFO: killing all VMs
00:35:35.971 INFO: killing vhost app
00:35:35.971 INFO: EXIT DONE
00:35:36.540 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:35:36.801 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:35:36.801 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:35:36.801 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:35:36.801 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:35:37.376 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:35:37.638 Cleaning
00:35:37.638 Removing: /var/run/dpdk/spdk0/config
00:35:37.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:35:37.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:35:37.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:35:37.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:35:37.638 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:35:37.638 Removing: /var/run/dpdk/spdk0/hugepage_info
00:35:37.638 Removing: /var/run/dpdk/spdk0
00:35:37.638 Removing: /var/run/dpdk/spdk_pid68948
00:35:37.638 Removing: /var/run/dpdk/spdk_pid69106
00:35:37.638 Removing: /var/run/dpdk/spdk_pid69307
00:35:37.638 Removing: /var/run/dpdk/spdk_pid69389
00:35:37.638 Removing: /var/run/dpdk/spdk_pid69418
00:35:37.638 Removing: /var/run/dpdk/spdk_pid69529
00:35:37.638 Removing: /var/run/dpdk/spdk_pid69547
00:35:37.638 Removing: /var/run/dpdk/spdk_pid69730
00:35:37.638 Removing: /var/run/dpdk/spdk_pid69803
00:35:37.638 Removing: /var/run/dpdk/spdk_pid69883
00:35:37.638 Removing: /var/run/dpdk/spdk_pid69977
00:35:37.638 Removing: /var/run/dpdk/spdk_pid70058
00:35:37.638 Removing: /var/run/dpdk/spdk_pid70097
00:35:37.638 Removing: /var/run/dpdk/spdk_pid70134
00:35:37.638 Removing: /var/run/dpdk/spdk_pid70199
00:35:37.638 Removing: /var/run/dpdk/spdk_pid70292
00:35:37.638 Removing: /var/run/dpdk/spdk_pid70712
00:35:37.638 Removing: /var/run/dpdk/spdk_pid70759
00:35:37.638 Removing: /var/run/dpdk/spdk_pid70811
00:35:37.638 Removing: /var/run/dpdk/spdk_pid70827
00:35:37.638 Removing: /var/run/dpdk/spdk_pid70885
00:35:37.638 Removing: /var/run/dpdk/spdk_pid70901
00:35:37.638 Removing: /var/run/dpdk/spdk_pid70959
00:35:37.638 Removing: /var/run/dpdk/spdk_pid70975
00:35:37.638 Removing: /var/run/dpdk/spdk_pid71017
00:35:37.638 Removing: /var/run/dpdk/spdk_pid71035
00:35:37.638 Removing: /var/run/dpdk/spdk_pid71077
00:35:37.638 Removing: /var/run/dpdk/spdk_pid71095
00:35:37.638 Removing: /var/run/dpdk/spdk_pid71222
00:35:37.638 Removing: /var/run/dpdk/spdk_pid71264
00:35:37.638 Removing: /var/run/dpdk/spdk_pid71342
00:35:37.638 Removing: /var/run/dpdk/spdk_pid71503
00:35:37.899 Removing: /var/run/dpdk/spdk_pid71576
00:35:37.899 Removing: /var/run/dpdk/spdk_pid71607
00:35:37.899 Removing: /var/run/dpdk/spdk_pid72029
00:35:37.899 Removing: /var/run/dpdk/spdk_pid72121
00:35:37.899 Removing: /var/run/dpdk/spdk_pid72222
00:35:37.899 Removing: /var/run/dpdk/spdk_pid72264
00:35:37.899 Removing: /var/run/dpdk/spdk_pid72284
00:35:37.899 Removing: /var/run/dpdk/spdk_pid72364
00:35:37.899 Removing: /var/run/dpdk/spdk_pid72975
00:35:37.899 Removing: /var/run/dpdk/spdk_pid73006
00:35:37.899 Removing: /var/run/dpdk/spdk_pid73465
00:35:37.899 Removing: /var/run/dpdk/spdk_pid73558
00:35:37.899 Removing: /var/run/dpdk/spdk_pid73667
00:35:37.899 Removing: /var/run/dpdk/spdk_pid73709
00:35:37.899 Removing: /var/run/dpdk/spdk_pid73729
00:35:37.899 Removing: /var/run/dpdk/spdk_pid73755
00:35:37.899 Removing: /var/run/dpdk/spdk_pid75579
00:35:37.899 Removing: /var/run/dpdk/spdk_pid75705
00:35:37.899 Removing: /var/run/dpdk/spdk_pid75709
00:35:37.899 Removing: /var/run/dpdk/spdk_pid75721
00:35:37.899 Removing: /var/run/dpdk/spdk_pid75766
00:35:37.899 Removing: /var/run/dpdk/spdk_pid75770
00:35:37.899 Removing: /var/run/dpdk/spdk_pid75782
00:35:37.899 Removing: /var/run/dpdk/spdk_pid75827
00:35:37.899 Removing: /var/run/dpdk/spdk_pid75831
00:35:37.899 Removing: /var/run/dpdk/spdk_pid75843
00:35:37.899 Removing: /var/run/dpdk/spdk_pid75882
00:35:37.899 Removing: /var/run/dpdk/spdk_pid75886
00:35:37.899 Removing: /var/run/dpdk/spdk_pid75898
00:35:37.899 Removing: /var/run/dpdk/spdk_pid77285
00:35:37.899 Removing: /var/run/dpdk/spdk_pid77371
00:35:37.899 Removing: /var/run/dpdk/spdk_pid78767
00:35:37.899 Removing: /var/run/dpdk/spdk_pid80523
00:35:37.899 Removing: /var/run/dpdk/spdk_pid80580
00:35:37.899 Removing: /var/run/dpdk/spdk_pid80650
00:35:37.899 Removing: /var/run/dpdk/spdk_pid80749
00:35:37.899 Removing: /var/run/dpdk/spdk_pid80835
00:35:37.899 Removing: /var/run/dpdk/spdk_pid80925
00:35:37.899 Removing: /var/run/dpdk/spdk_pid80977
00:35:37.899 Removing: /var/run/dpdk/spdk_pid81047
00:35:37.899 Removing: /var/run/dpdk/spdk_pid81146
00:35:37.899 Removing: /var/run/dpdk/spdk_pid81227
00:35:37.899 Removing: /var/run/dpdk/spdk_pid81317
00:35:37.899 Removing: /var/run/dpdk/spdk_pid81376
00:35:37.899 Removing: /var/run/dpdk/spdk_pid81450
00:35:37.899 Removing: /var/run/dpdk/spdk_pid81543
00:35:37.899 Removing: /var/run/dpdk/spdk_pid81624
00:35:37.899 Removing: /var/run/dpdk/spdk_pid81714
00:35:37.899 Removing: /var/run/dpdk/spdk_pid81766
00:35:37.899 Removing: /var/run/dpdk/spdk_pid81836
00:35:37.899 Removing: /var/run/dpdk/spdk_pid81934
00:35:37.899 Removing: /var/run/dpdk/spdk_pid82015
00:35:37.899 Removing: /var/run/dpdk/spdk_pid82105
00:35:37.899 Removing: /var/run/dpdk/spdk_pid82163
00:35:37.899 Removing: /var/run/dpdk/spdk_pid82231
00:35:37.899 Removing: /var/run/dpdk/spdk_pid82300
00:35:37.899 Removing: /var/run/dpdk/spdk_pid82363
00:35:37.899 Removing: /var/run/dpdk/spdk_pid82461
00:35:37.899 Removing: /var/run/dpdk/spdk_pid82546
00:35:37.899 Removing: /var/run/dpdk/spdk_pid82635
00:35:37.899 Removing: /var/run/dpdk/spdk_pid82693
00:35:37.899 Removing: /var/run/dpdk/spdk_pid82756
00:35:37.899 Removing: /var/run/dpdk/spdk_pid82825
00:35:37.899 Removing: /var/run/dpdk/spdk_pid82888
00:35:37.899 Removing: /var/run/dpdk/spdk_pid82991
00:35:37.899 Removing: /var/run/dpdk/spdk_pid83071
00:35:37.899 Removing: /var/run/dpdk/spdk_pid83209
00:35:37.899 Removing: /var/run/dpdk/spdk_pid83471
00:35:37.900 Removing: /var/run/dpdk/spdk_pid83502
00:35:37.900 Removing: /var/run/dpdk/spdk_pid83947
00:35:37.900 Removing: /var/run/dpdk/spdk_pid84124
00:35:37.900 Removing: /var/run/dpdk/spdk_pid84222
00:35:37.900 Removing: /var/run/dpdk/spdk_pid84323
00:35:37.900 Removing: /var/run/dpdk/spdk_pid84364
00:35:37.900 Removing: /var/run/dpdk/spdk_pid84385
00:35:37.900 Removing: /var/run/dpdk/spdk_pid84683
00:35:37.900 Removing: /var/run/dpdk/spdk_pid84721
00:35:37.900 Removing: /var/run/dpdk/spdk_pid84772
00:35:37.900 Removing: /var/run/dpdk/spdk_pid85136
00:35:37.900 Removing: /var/run/dpdk/spdk_pid85281
00:35:37.900 Removing: /var/run/dpdk/spdk_pid86077
00:35:37.900 Removing: /var/run/dpdk/spdk_pid86198
00:35:37.900 Removing: /var/run/dpdk/spdk_pid86354
00:35:37.900 Removing: /var/run/dpdk/spdk_pid86446
00:35:37.900 Removing: /var/run/dpdk/spdk_pid86736
00:35:37.900 Removing: /var/run/dpdk/spdk_pid86985
00:35:37.900 Removing: /var/run/dpdk/spdk_pid87329
00:35:37.900 Removing: /var/run/dpdk/spdk_pid87505
00:35:37.900 Removing: /var/run/dpdk/spdk_pid87602
00:35:38.162 Removing: /var/run/dpdk/spdk_pid87638
00:35:38.162 Removing: /var/run/dpdk/spdk_pid87840
00:35:38.162 Removing: /var/run/dpdk/spdk_pid87854
00:35:38.162 Removing: /var/run/dpdk/spdk_pid87890
00:35:38.162 Removing: /var/run/dpdk/spdk_pid88159
00:35:38.162 Removing: /var/run/dpdk/spdk_pid88373
00:35:38.162 Removing: /var/run/dpdk/spdk_pid88939
00:35:38.162 Removing: /var/run/dpdk/spdk_pid89765
00:35:38.162 Removing: /var/run/dpdk/spdk_pid90237
00:35:38.162 Removing: /var/run/dpdk/spdk_pid91013
00:35:38.162 Removing: /var/run/dpdk/spdk_pid91161
00:35:38.162 Removing: /var/run/dpdk/spdk_pid91242
00:35:38.162 Removing: /var/run/dpdk/spdk_pid91758
00:35:38.162 Removing: /var/run/dpdk/spdk_pid91816
00:35:38.162 Removing: /var/run/dpdk/spdk_pid92503
00:35:38.162 Removing: /var/run/dpdk/spdk_pid93034
00:35:38.162 Removing: /var/run/dpdk/spdk_pid93837
00:35:38.162 Removing: /var/run/dpdk/spdk_pid93955
00:35:38.162 Removing: /var/run/dpdk/spdk_pid93991
00:35:38.162 Removing: /var/run/dpdk/spdk_pid94044
00:35:38.162 Removing: /var/run/dpdk/spdk_pid94089
00:35:38.162 Removing: /var/run/dpdk/spdk_pid94141
00:35:38.162 Removing: /var/run/dpdk/spdk_pid94319
00:35:38.162 Removing: /var/run/dpdk/spdk_pid94394
00:35:38.162 Removing: /var/run/dpdk/spdk_pid94451
00:35:38.162 Removing: /var/run/dpdk/spdk_pid94529
00:35:38.162 Removing: /var/run/dpdk/spdk_pid94558
00:35:38.162 Removing: /var/run/dpdk/spdk_pid94614
00:35:38.162 Removing: /var/run/dpdk/spdk_pid94746
00:35:38.162 Removing: /var/run/dpdk/spdk_pid94950
00:35:38.162 Removing: /var/run/dpdk/spdk_pid95529
00:35:38.162 Removing: /var/run/dpdk/spdk_pid96205
00:35:38.162 Removing: /var/run/dpdk/spdk_pid96995
00:35:38.162 Removing: /var/run/dpdk/spdk_pid97822
00:35:38.162 Clean
00:35:38.162 21:17:56 -- common/autotest_common.sh@1453 -- # return 0
00:35:38.162 21:17:56 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:38.162 21:17:56 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:38.162 21:17:56 -- common/autotest_common.sh@10 -- # set +x
00:35:38.162 21:17:56 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:38.162 21:17:56 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:38.162 21:17:56 -- common/autotest_common.sh@10 -- # set +x
00:35:38.162 21:17:56 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:38.423 21:17:56 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:35:38.423 21:17:56 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:35:38.423 21:17:56 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:38.423 21:17:56 -- spdk/autotest.sh@398 -- # hostname
00:35:38.423 21:17:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:35:38.423 geninfo: WARNING: invalid characters removed from testname!
00:36:05.010 21:18:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:06.920 21:18:25 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:09.451 21:18:27 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:11.354 21:18:29 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:13.262 21:18:31 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:15.804 21:18:33 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:18.348 21:18:36 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:18.348 21:18:36 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:18.348 21:18:36 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:36:18.348 21:18:36 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:18.348 21:18:36 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:18.348 21:18:36 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:36:18.348 + [[ -n 5762 ]]
00:36:18.348 + sudo kill 5762
00:36:18.358 [Pipeline] }
00:36:18.373 [Pipeline] // timeout
00:36:18.378 [Pipeline] }
00:36:18.392 [Pipeline] // stage
00:36:18.397 [Pipeline] }
00:36:18.411 [Pipeline] // catchError
00:36:18.421 [Pipeline] stage
00:36:18.424 [Pipeline] { (Stop VM)
00:36:18.438 [Pipeline] sh
00:36:18.725 + vagrant halt
00:36:21.275 ==> default: Halting domain...
00:36:26.644 [Pipeline] sh
00:36:26.930 + vagrant destroy -f
00:36:29.469 ==> default: Removing domain...
00:36:30.055 [Pipeline] sh
00:36:30.340 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:36:30.351 [Pipeline] }
00:36:30.366 [Pipeline] // stage
00:36:30.371 [Pipeline] }
00:36:30.385 [Pipeline] // dir
00:36:30.391 [Pipeline] }
00:36:30.408 [Pipeline] // wrap
00:36:30.416 [Pipeline] }
00:36:30.429 [Pipeline] // catchError
00:36:30.439 [Pipeline] stage
00:36:30.441 [Pipeline] { (Epilogue)
00:36:30.452 [Pipeline] sh
00:36:30.738 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:36.032 [Pipeline] catchError
00:36:36.034 [Pipeline] {
00:36:36.047 [Pipeline] sh
00:36:36.331 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:36.331 Artifacts sizes are good
00:36:36.342 [Pipeline] }
00:36:36.357 [Pipeline] // catchError
00:36:36.369 [Pipeline] archiveArtifacts
00:36:36.376 Archiving artifacts
00:36:36.469 [Pipeline] cleanWs
00:36:36.482 [WS-CLEANUP] Deleting project workspace...
00:36:36.482 [WS-CLEANUP] Deferred wipeout is used...
00:36:36.490 [WS-CLEANUP] done
00:36:36.492 [Pipeline] }
00:36:36.510 [Pipeline] // stage
00:36:36.516 [Pipeline] }
00:36:36.532 [Pipeline] // node
00:36:36.539 [Pipeline] End of Pipeline
00:36:36.584 Finished: SUCCESS